Re: [openstack-dev] [higgins] Should we rename "Higgins"?

2016-06-01 Thread Fei Long Wang
+1 for Zun, I love it and it's definitely a good container :)


On 02/06/16 15:46, Monty Taylor wrote:
> On 06/02/2016 06:29 AM, 秀才 wrote:
>> i suggest a name Zun :)
>> please see the reference: https://en.wikipedia.org/wiki/Zun
> It's available on pypi and launchpad. I especially love that one of the
> important examples is the "Four-goat square Zun"
>
> https://en.wikipedia.org/wiki/Zun#Four-goat_square_Zun
>
> I don't get a vote - but I vote for this one.
>
>> -- Original --
>> *From: * "Rochelle Grober";;
>> *Date: * Thu, Jun 2, 2016 09:47 AM
>> *To: * "OpenStack Development Mailing List (not for usage
>> questions)";
>> *Cc: * "Haruhiko Katou";
>> *Subject: * Re: [openstack-dev] [higgins] Should we rename "Higgins"?
>>
>> Well, you could stick with the wine bottle analogy  and go with a bigger
>> size:
>>
>> Jeroboam
>> Methuselah
>> Salmanazar
>> Balthazar
>> Nabuchadnezzar
>>
>> --Rocky
>>
>> -Original Message-
>> From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
>> Sent: Wednesday, June 01, 2016 3:44 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Cc: Haruhiko Katou
>> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
>>
>> Thanks Shu for providing suggestions.
>>
>> I wanted the new name to be related to containers, as Magnum is also a
>> synonym for containers. So I have a few options here:
>>
>> 1. Casket
>> 2. Canister
>> 3. Cistern
>> 4. Hutch
>>
>> All of the above options are free to take on PyPI and Launchpad.
>> Thoughts?
>>
>> Regards
>> Madhuri
>>
>> -Original Message-
>> From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
>> Sent: Wednesday, June 1, 2016 11:11 AM
>> To: openstack-dev@lists.openstack.org
>> Cc: Haruhiko Katou 
>> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
>>
>> I found container-related names and checked whether any other project
>> uses them:
>>
>> https://en.wikipedia.org/wiki/Straddle_carrier
>> https://en.wikipedia.org/wiki/Suezmax
>> https://en.wikipedia.org/wiki/Twistlock
>>
>> These words are not used by any other project on PyPI or Launchpad.
>>
>> ex.)
>> https://pypi.python.org/pypi/straddle
>> https://launchpad.net/straddle
>>
>>
>> However, since the renaming chance for the N cycle will be handled by
>> the Infra team this Friday, we would not meet the deadline. So:
>>
>> 1. use 'Higgins' ('python-higgins' for the package name)
>> 2. consider another name for the next renaming chance (after half a year)
>>
>> Thoughts?
>>
>>
>> Regards,
>> Shu
>>
>>
>>> -Original Message-
>>> From: Hongbin Lu [mailto:hongbin...@huawei.com]
>>> Sent: Wednesday, June 01, 2016 11:37 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> 
>>> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
>>>
>>> Shu,
>>>
>>> According to the feedback from the last team meeting, Gatling doesn't
>>> seem to be a suitable name. Are you able to find an alternative name?
>>>
>>> Best regards,
>>> Hongbin
>>>
 -Original Message-
 From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
 Sent: May-24-16 4:30 AM
 To: openstack-dev@lists.openstack.org
 Cc: Haruhiko Katou
 Subject: [openstack-dev] [higgins] Should we rename "Higgins"?

 Hi all,

 Unfortunately "higgins" is used by media server project on Launchpad
 and CI software on PYPI. Now, we use "python-higgins" for our
 project on Launchpad.

 IMO, we should rename project to prevent increasing points to patch.

 How about "Gatling"? It's only association from Magnum. It's not
 used on both Launchpad and PYPI.
 Is there any idea?

 Renaming opportunity will come (it seems only twice in a year) on
 Friday, June 3rd. Few projects will rename on this date.
 http://markmail.org/thread/ia3o3vz7mzmjxmcx

 And if project name issue will be fixed, I'd like to propose UI
 subproject.

 Thanks,
 Shu




[openstack-dev] [Monasca] "The server is currently unavailable" on Monasca service

2016-06-01 Thread Pradip Mukhopadhyay
Hello,


I am seeing the following issue after recently installing Monasca in a
devstack environment:


stack@scsor0002143001-pradipm:~/devstack$ monasca metric-list
ERROR (exc:80) exception: {"message": "The server is currently unavailable.
Please try again at a later time.\n\n\n", "code": "503 Service
Unavailable", "title": "Service Unavailable"}
HTTPException code=503 message={"message": "The server is currently
unavailable. Please try again at a later time.\n\n\n", "code":
"503 Service Unavailable", "title": "Service Unavailable"}


stack@scsor0002143001-pradipm:~/devstack$ openstack user list
+----------------------------------+---------------+
| ID                               | Name          |
+----------------------------------+---------------+
| f29f1814fbc34855ba684a2810434580 | admin         |
| a9c0c928caff4345b2aa266a30397487 | demo          |
| f172a9e25d2542c895fbc43d73164a9d | alt_demo      |
| 6cfdec260fe94c4487071f9e7915b359 | nova          |
| 1a66c71e946442f2a061c88cef566c30 | glance        |
| bdb1aed519504b249cffd27a2c05f0fa | cinder        |
| 5c9cf5cedcb244dbafbe272f691eaf2f | mini-mon      |
| a349c41c99fa447b96d1d2a56b12c3d4 | monasca-agent |
+----------------------------------+---------------+


Here is the local.conf that I have used:



# BEGIN DEVSTACK LOCAL.CONF CONTENTS

[[local|localrc]]
ADMIN_PASSWORD=netapp
MYSQL_PASSWORD=$ADMIN_PASSWORD
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
GUEST_PASSWORD=$ADMIN_PASSWORD
MYSQL_HOST=127.0.0.1
MYSQL_USER=root
RABBIT_HOST=127.0.0.1
SERVICE_TOKEN=111222333444

LOGFILE=$DEST/logs/stack.sh.log
LOGDIR=$DEST/logs
LOG_COLOR=False
SCREEN_LOGDIR=$DEST/logs/screen
LOGDAYS=1

# The following two variables allow switching between Java and Python for the
# implementations of the Monasca API and the Monasca Persister. If these
# variables are not set, then the default is to install the Python
# implementations of both the Monasca API and the Monasca Persister.

# Uncomment one of the following two lines to choose Java or Python for the
# Monasca API.
#MONASCA_API_IMPLEMENTATION_LANG=${MONASCA_API_IMPLEMENTATION_LANG:-java}
MONASCA_API_IMPLEMENTATION_LANG=${MONASCA_API_IMPLEMENTATION_LANG:-python}

# Uncomment one of the following two lines to choose Java or Python for the
# Monasca Persister.
#MONASCA_PERSISTER_IMPLEMENTATION_LANG=${MONASCA_PERSISTER_IMPLEMENTATION_LANG:-java}
MONASCA_PERSISTER_IMPLEMENTATION_LANG=${MONASCA_PERSISTER_IMPLEMENTATION_LANG:-python}

MONASCA_METRICS_DB=${MONASCA_METRICS_DB:-influxdb}
# MONASCA_METRICS_DB=${MONASCA_METRICS_DB:-vertica}

# This line will enable all of Monasca.
enable_plugin monasca-api git://git.openstack.org/openstack/monasca-api
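
A quick way to narrow down a 503 like this (a sketch only; the endpoint URL,
port and token below are assumptions, not values from this thread -- check
"openstack endpoint list" for the real endpoint) is to poke monasca-api
directly and see whether the process is listening at all:

import requests

MONASCA_URL = "http://127.0.0.1:8070"  # assumed devstack default, verify locally

try:
    r = requests.get(MONASCA_URL + "/v2.0/metrics",
                     headers={"X-Auth-Token": "YOUR_TOKEN"},  # placeholder token
                     timeout=5)
    print(r.status_code, r.text[:200])  # a 503 here points at the API backend
except requests.exceptions.ConnectionError:
    print("monasca-api is not listening at", MONASCA_URL)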







--pradip


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-01 Thread Kumari, Madhuri
Hi Hongbin,

I also like the idea of having a heterogeneous set of nodes, but IMO such 
features should not be implemented in Magnum itself, deviating Magnum again from 
its roadmap. Instead, we should leverage Heat (or maybe Senlin) APIs for the 
same.

I vote +1 for this feature.

Regards,
Madhuri

-Original Message-
From: Hongbin Lu [mailto:hongbin...@huawei.com] 
Sent: Thursday, June 2, 2016 3:33 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually managing the 
bay nodes

Personally, I think this is a good idea, since it can address a set of similar 
use cases like below:
* I want to deploy a k8s cluster to 2 availability zone (in future 2 
regions/clouds).
* I want to spin up N nodes in AZ1, M nodes in AZ2.
* I want to scale the number of nodes in specific AZ/region/cloud. For example, 
add/remove K nodes from AZ1 (with AZ2 untouched).

The use case above should be very common and universal everywhere. To address 
the use case, Magnum needs to support provisioning heterogeneous set of nodes 
at deploy time and managing them at runtime. It looks like the proposed idea 
(manually managing individual nodes or individual groups of nodes) can address 
this requirement very well. Besides the proposed idea, I cannot think of an 
alternative solution.

Therefore, I vote to support the proposed idea.

Best regards,
Hongbin

> -Original Message-
> From: Hongbin Lu
> Sent: June-01-16 11:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually 
> managing the bay nodes
> 
> Hi team,
> 
> A blueprint was created for tracking this idea:
> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
> nodes . I won't approve the BP until there is a team decision on 
> accepting/rejecting the idea.
> 
> From the discussion at the design summit, it looks like everyone is OK with 
> the idea in general (with some disagreements on the API style). However, 
> from the last team meeting, it looks like some people disagree with the 
> idea fundamentally, so I re-raised this on the ML to re-discuss.
> 
> If you agree or disagree with the idea of manually managing the Heat 
> stacks (that contains individual bay nodes), please write down your 
> arguments here. Then, we can start debating on that.
> 
> Best regards,
> Hongbin
> 
> > -Original Message-
> > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> > Sent: May-16-16 5:28 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually 
> > managing the bay nodes
> >
> > The discussion at the summit was very positive around this requirement,
> > but as this change will have a large impact on Magnum it will need a
> > spec.
> >
> > On the API of things, I was thinking a slightly more generic 
> > approach to incorporate other lifecycle operations into the same API.
> > Eg:
> > magnum bay-manage <bay> <operation>
> >
> > magnum bay-manage <bay> reset --hard
> > magnum bay-manage <bay> rebuild
> > magnum bay-manage <bay> node-delete <node>
> > magnum bay-manage <bay> node-add --flavor <flavor>
> > magnum bay-manage <bay> node-reset <node>
> > magnum bay-manage <bay> node-list
> >
> > Tom
> >
> > From: Yuanying OTSUKA 
> > Reply-To: "OpenStack Development Mailing List (not for usage 
> > questions)" 
> > Date: Monday, 16 May 2016 at 01:07
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually 
> > managing the bay nodes
> >
> > Hi,
> >
> > I think users also want to specify which node to delete.
> > So we should manage each "node" individually.
> >
> > For example:
> > $ magnum node-create --bay …
> > $ magnum node-list --bay
> > $ magnum node-delete $NODE_UUID
> >
> > Anyway, if Magnum wants to manage the lifecycle of container
> > infrastructure, this feature is necessary.
> >
> > Thanks
> > -yuanying
> >
> >
> > On Mon, 16 May 2016 at 7:50, Hongbin Lu wrote:
> > Hi all,
> >
> > This is a continued discussion from the design summit. For recap, 
> > Magnum manages bay nodes by using ResourceGroup from Heat. This 
> > approach works but it is infeasible to manage the heterogeneity
> across
> > bay nodes, which is a frequently demanded feature. As an example, 
> > there is a request to provision bay nodes across availability zones
> [1].
> > There is another request to provision bay nodes with different set 
> > of flavors [2]. For the request features above, ResourceGroup won’t 
> > work very well.
> >
> > The proposal is to remove the usage of ResourceGroup and manually
> > create a Heat stack for each bay node. For example, for creating a
> > cluster with 2 masters and 3 minions, Magnum is going to manage 6
> > Heat stacks (instead of 1 big 
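
To make the proposal concrete, here is a sketch (not Magnum code; the template
and names are hypothetical) of the one-stack-per-node model using
python-heatclient, which is what lets each node carry its own flavor and
availability zone:

NODE_TEMPLATE = """
heat_template_version: 2015-10-15
parameters:
  flavor: {type: string}
  availability_zone: {type: string}
resources:
  node:
    type: OS::Nova::Server
    properties:
      image: fedora-atomic        # placeholder image
      flavor: {get_param: flavor}
      availability_zone: {get_param: availability_zone}
"""

def create_bay_nodes(heat, bay_name, node_specs):
    """heat: a heatclient Client; node_specs: e.g.
    [('m1.large', 'AZ1')] * 2 + [('m1.small', 'AZ2')] * 3.
    Creates one stack per node, so each can be scaled or deleted individually."""
    for i, (flavor, az) in enumerate(node_specs):
        heat.stacks.create(
            stack_name='%s-node-%d' % (bay_name, i),
            template=NODE_TEMPLATE,
            parameters={'flavor': flavor, 'availability_zone': az})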

Re: [openstack-dev] [higgins] Should we rename "Higgins"?

2016-06-01 Thread Monty Taylor
On 06/02/2016 06:29 AM, 秀才 wrote:
> i suggest a name Zun :)
> please see the reference: https://en.wikipedia.org/wiki/Zun

It's available on pypi and launchpad. I especially love that one of the
important examples is the "Four-goat square Zun"

https://en.wikipedia.org/wiki/Zun#Four-goat_square_Zun

I don't get a vote - but I vote for this one.

> -- Original --
> *From: * "Rochelle Grober";;
> *Date: * Thu, Jun 2, 2016 09:47 AM
> *To: * "OpenStack Development Mailing List (not for usage
> questions)";
> *Cc: * "Haruhiko Katou";
> *Subject: * Re: [openstack-dev] [higgins] Should we rename "Higgins"?
> 
> Well, you could stick with the wine bottle analogy  and go with a bigger
> size:
> 
> Jeroboam
> Methuselah
> Salmanazar
> Balthazar
> Nabuchadnezzar
> 
> --Rocky
> 
> -Original Message-
> From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
> Sent: Wednesday, June 01, 2016 3:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Haruhiko Katou
> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
> 
> Thanks Shu for providing suggestions.
> 
> I wanted the new name to be related to containers, as Magnum is also a
> synonym for containers. So I have a few options here:
> 
> 1. Casket
> 2. Canister
> 3. Cistern
> 4. Hutch
> 
> All of the above options are free to take on PyPI and Launchpad.
> Thoughts?
> 
> Regards
> Madhuri
> 
> -Original Message-
> From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
> Sent: Wednesday, June 1, 2016 11:11 AM
> To: openstack-dev@lists.openstack.org
> Cc: Haruhiko Katou 
> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
> 
> I found container-related names and checked whether any other project
> uses them:
> 
> https://en.wikipedia.org/wiki/Straddle_carrier
> https://en.wikipedia.org/wiki/Suezmax
> https://en.wikipedia.org/wiki/Twistlock
> 
> These words are not used by any other project on PyPI or Launchpad.
> 
> ex.)
> https://pypi.python.org/pypi/straddle
> https://launchpad.net/straddle
> 
> 
> However, since the renaming chance for the N cycle will be handled by
> the Infra team this Friday, we would not meet the deadline. So:
> 
> 1. use 'Higgins' ('python-higgins' for the package name)
> 2. consider another name for the next renaming chance (after half a year)
> 
> Thoughts?
> 
> 
> Regards,
> Shu
> 
> 
>> -Original Message-
>> From: Hongbin Lu [mailto:hongbin...@huawei.com]
>> Sent: Wednesday, June 01, 2016 11:37 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
>>
>> Shu,
>>
>> According to the feedback from the last team meeting, Gatling doesn't
>> seem to be a suitable name. Are you able to find an alternative name?
>>
>> Best regards,
>> Hongbin
>>
>> > -Original Message-
>> > From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
>> > Sent: May-24-16 4:30 AM
>> > To: openstack-dev@lists.openstack.org
>> > Cc: Haruhiko Katou
>> > Subject: [openstack-dev] [higgins] Should we rename "Higgins"?
>> >
>> > Hi all,
>> >
>> > Unfortunately, "higgins" is used by a media server project on Launchpad
>> > and by CI software on PyPI. For now, we use "python-higgins" for our
>> > project on Launchpad.
>> >
>> > IMO, we should rename the project to avoid a growing number of places
>> > that need patching.
>> >
>> > How about "Gatling"? It's the only association from Magnum I could find,
>> > and it's not used on either Launchpad or PyPI.
>> > Any other ideas?
>> >
>> > A renaming opportunity comes (apparently only twice a year) on
>> > Friday, June 3rd. A few projects will be renamed on that date.
>> > http://markmail.org/thread/ia3o3vz7mzmjxmcx
>> >
>> > And once the project name issue is fixed, I'd like to propose a UI
>> > subproject.
>> >
>> > Thanks,
>> > Shu
>> >
>> >
>> >

Re: [openstack-dev] Kilo end of life

2016-06-01 Thread Monty Taylor
On 06/01/2016 11:19 PM, Matt Riedemann wrote:
> FYI, I've got some changes up to project-config and the releases repo to
> mark kilo as end of life [1]. Andreas has done the same for the manuals.
> 
> The project-config changes remove the kilo periodic job definitions and
> usage from all repos, regardless of whether they are managed by
> the stable-maint team or not, e.g. murano.
> 
> The projects which don't have a kilo-eol tag or final release from Dave
> Walker should do that using their own release process, i.e., however they
> have handled stable branch EOL in the past, like for Juno.
> 
> [1] https://review.openstack.org/#/q/topic:kilo-eol
> 

\o/



Re: [openstack-dev] [higgins] Should we rename "Higgins"?

2016-06-01 Thread 秀才
i suggest a name Zun :)
please see the reference: https://en.wikipedia.org/wiki/Zun


Regards
XiuCai


-- Original --
From:  "Rochelle Grober";;
Date:  Thu, Jun 2, 2016 09:47 AM
To:  "OpenStack Development Mailing List (not for usage 
questions)"; 
Cc:  "Haruhiko Katou"; 
Subject:  Re: [openstack-dev] [higgins] Should we rename "Higgins"?



Well, you could stick with the wine bottle analogy  and go with a bigger size:

Jeroboam
Methuselah
Salmanazar
Balthazar
Nabuchadnezzar

--Rocky

-Original Message-
From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com] 
Sent: Wednesday, June 01, 2016 3:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Haruhiko Katou
Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?

Thanks Shu for providing suggestions.

I wanted the new name to be related to containers, as Magnum is also a synonym
for containers. So I have a few options here:

1. Casket
2. Canister
3. Cistern
4. Hutch

All of the above options are free to take on PyPI and Launchpad.
Thoughts?

Regards
Madhuri

-Original Message-
From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com] 
Sent: Wednesday, June 1, 2016 11:11 AM
To: openstack-dev@lists.openstack.org
Cc: Haruhiko Katou 
Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?

I found container-related names and checked whether any other project uses
them:

https://en.wikipedia.org/wiki/Straddle_carrier
https://en.wikipedia.org/wiki/Suezmax
https://en.wikipedia.org/wiki/Twistlock

These words are not used by any other project on PyPI or Launchpad.

ex.)
https://pypi.python.org/pypi/straddle
https://launchpad.net/straddle


However, since the renaming chance for the N cycle will be handled by the
Infra team this Friday, we would not meet the deadline. So:

1. use 'Higgins' ('python-higgins' for the package name)
2. consider another name for the next renaming chance (after half a year)

Thoughts?


Regards,
Shu


> -Original Message-
> From: Hongbin Lu [mailto:hongbin...@huawei.com]
> Sent: Wednesday, June 01, 2016 11:37 AM
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
> 
> Shu,
> 
> According to the feedback from the last team meeting, Gatling doesn't 
> seem to be a suitable name. Are you able to find an alternative name?
> 
> Best regards,
> Hongbin
> 
> > -Original Message-
> > From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
> > Sent: May-24-16 4:30 AM
> > To: openstack-dev@lists.openstack.org
> > Cc: Haruhiko Katou
> > Subject: [openstack-dev] [higgins] Should we rename "Higgins"?
> >
> > Hi all,
> >
> > Unfortunately, "higgins" is used by a media server project on Launchpad
> > and by CI software on PyPI. For now, we use "python-higgins" for our
> > project on Launchpad.
> >
> > IMO, we should rename the project to avoid a growing number of places
> > that need patching.
> >
> > How about "Gatling"? It's the only association from Magnum I could find,
> > and it's not used on either Launchpad or PyPI.
> > Any other ideas?
> >
> > A renaming opportunity comes (apparently only twice a year) on
> > Friday, June 3rd. A few projects will be renamed on that date.
> > http://markmail.org/thread/ia3o3vz7mzmjxmcx
> >
> > And once the project name issue is fixed, I'd like to propose a UI
> > subproject.
> >
> > Thanks,
> > Shu
> >
> >
> >

Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-06-01 Thread Na Zhu
Hi John,

Thanks for your reply.

It seems you have covered everything :)
The development work can be broken down into 3 parts:
1. add an OVN driver to networking-sfc
2. provide APIs in networking-ovn for networking-sfc
3. implement SFC in OVN

So how about we take parts 1 and 2, and you take part 3? Because we
are familiar with networking-sfc and networking-ovn, we can do it
faster :)





Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   John McDowall 
To: Na Zhu/China/IBM@IBMCN
Cc: Ryan Moats , OpenStack Development Mailing List 
, "disc...@openvswitch.org" 
, Srilatha Tangirala 
Date:   2016/06/01 23:26
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] 
SFC andOVN



Na/Srilatha,

Great, I am working from three repos:

https://github.com/doonhammer/networking-sfc
https://github.com/doonhammer/networking-ovn
https://github.com/doonhammer/ovs

I had an original prototype working that used an API I created. Since 
then, based on feedback from everyone I have been moving the API to the 
networking-sfc model and then supporting that API in networking-ovn and 
ovs/ovn. I have created a new driver in networking-sfc for ovn.

I am in the process of moving networking-ovn and ovs to support the sfc 
model. Basically I am intending to pass a deep copy of the port-chain 
(sample attached, sfc_dict.py) from the ovn driver in networking-sfc to 
networking-ovn. This, as Ryan pointed out, will minimize the dependencies 
between networking-sfc and networking-ovn. I have created additional 
schema for ovs/ovn (attached) that will provide the linkage between 
networking-ovn and ovs/ovn. I have the schema in ovs/ovn and I am in the 
process of updating my code to support it.
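
For readers without the attachment, a purely illustrative shape for such a
flattened port-chain dict; the field names follow the networking-sfc API,
but this is not the actual sfc_dict.py:

# Illustrative only -- not the attached sfc_dict.py.
port_chain = {
    'id': 'CHAIN_UUID',
    'chain_parameters': {'correlation': 'mpls'},
    'port_pair_groups': [{
        'id': 'PPG_UUID',
        'port_pairs': [{'ingress': 'PORT_UUID_1',
                        'egress': 'PORT_UUID_2'}],
    }],
    'flow_classifiers': [{'protocol': 'tcp',
                          'destination_port_range_min': 80,
                          'destination_port_range_max': 80}],
}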

Not sure where you guys want to jump in, but I can help in any way you 
need.

Regards

John

From: Na Zhu 
Date: Tuesday, May 31, 2016 at 9:02 PM
To: John McDowall 
Cc: Ryan Moats , OpenStack Development Mailing List <
openstack-dev@lists.openstack.org>, "disc...@openvswitch.org" <
disc...@openvswitch.org>, Srilatha Tangirala 
Subject: Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC 
andOVN

+ Add Srilatha.



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:Na Zhu/China/IBM
To:John McDowall 
Cc:Ryan Moats , OpenStack Development Mailing 
List , "disc...@openvswitch.org" <
disc...@openvswitch.org>
Date:2016/06/01 12:01
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] 
SFC andOVN


John,

Thanks.

Srilatha (srila...@us.ibm.com) and I want to work together with you; I 
know you have already done some development work.
Can you tell us what you have done and put the latest code in your private 
repo?
Can we work out a plan for the remaining work?




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)




From:John McDowall 
To:Ryan Moats 
Cc:OpenStack Development Mailing List <
openstack-dev@lists.openstack.org>, "disc...@openvswitch.org" <
disc...@openvswitch.org>
Date:2016/06/01 08:58
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] 
SFC andOVN
Sent by:"discuss" 



Ryan,

More help is always great :-). As far as who collaborates with whom, whatever 
is easiest for everyone; I am pretty flexible.

Regards

John

From: Ryan Moats 
Date: Tuesday, May 31, 2016 at 1:59 PM
To: John McDowall 
Cc: Ben Pfaff , "disc...@openvswitch.org" <
disc...@openvswitch.org>, Justin Pettit , OpenStack 
Development Mailing List , Russell 
Bryant 
Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
John McDowall  wrote on 05/31/2016 
03:21:30 PM:

> From: John McDowall 
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: Ben Pfaff , "disc...@openvswitch.org" 
> , Justin Pettit , 
> "OpenStack Development Mailing List"  d...@lists.openstack.org>, Russell Bryant 
> Date: 05/31/2016 03:22 PM
> 

Re: [openstack-dev] [higgins] Should we rename "Higgins"?

2016-06-01 Thread Rochelle Grober
Well, you could stick with the wine bottle analogy  and go with a bigger size:

Jeroboam
Methuselah
Salmanazar
Balthazar
Nabuchadnezzar

--Rocky

-Original Message-
From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com] 
Sent: Wednesday, June 01, 2016 3:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Haruhiko Katou
Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?

Thanks Shu for providing suggestions.

I wanted the new name to be related to containers, as Magnum is also a synonym
for containers. So I have a few options here:

1. Casket
2. Canister
3. Cistern
4. Hutch

All of the above options are free to take on PyPI and Launchpad.
Thoughts?

Regards
Madhuri

-Original Message-
From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com] 
Sent: Wednesday, June 1, 2016 11:11 AM
To: openstack-dev@lists.openstack.org
Cc: Haruhiko Katou 
Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?

I found container-related names and checked whether any other project uses
them:

https://en.wikipedia.org/wiki/Straddle_carrier
https://en.wikipedia.org/wiki/Suezmax
https://en.wikipedia.org/wiki/Twistlock

These words are not used by any other project on PyPI or Launchpad.

ex.)
https://pypi.python.org/pypi/straddle
https://launchpad.net/straddle


However, since the renaming chance for the N cycle will be handled by the
Infra team this Friday, we would not meet the deadline. So:

1. use 'Higgins' ('python-higgins' for the package name)
2. consider another name for the next renaming chance (after half a year)

Thoughts?


Regards,
Shu


> -Original Message-
> From: Hongbin Lu [mailto:hongbin...@huawei.com]
> Sent: Wednesday, June 01, 2016 11:37 AM
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
> 
> Shu,
> 
> According to the feedback from the last team meeting, Gatling doesn't 
> seem to be a suitable name. Are you able to find an alternative name?
> 
> Best regards,
> Hongbin
> 
> > -Original Message-
> > From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
> > Sent: May-24-16 4:30 AM
> > To: openstack-dev@lists.openstack.org
> > Cc: Haruhiko Katou
> > Subject: [openstack-dev] [higgins] Should we rename "Higgins"?
> >
> > Hi all,
> >
> > Unfortunately, "higgins" is used by a media server project on Launchpad
> > and by CI software on PyPI. For now, we use "python-higgins" for our
> > project on Launchpad.
> >
> > IMO, we should rename the project to avoid a growing number of places
> > that need patching.
> >
> > How about "Gatling"? It's the only association from Magnum I could find,
> > and it's not used on either Launchpad or PyPI.
> > Any other ideas?
> >
> > A renaming opportunity comes (apparently only twice a year) on
> > Friday, June 3rd. A few projects will be renamed on that date.
> > http://markmail.org/thread/ia3o3vz7mzmjxmcx
> >
> > And once the project name issue is fixed, I'd like to propose a UI
> > subproject.
> >
> > Thanks,
> > Shu
> >
> >
> >


Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-06-01 Thread Yang, Yi Y
Replies inline, please check.

-Original Message-
From: Elzur, Uri [mailto:uri.el...@intel.com] 
Sent: Thursday, June 02, 2016 9:19 AM
To: OpenStack Development Mailing List (not for usage questions) 
; Cathy Zhang ; 
b...@ovn.org
Cc: Jesse Gross ; Jiri Benc 
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

Few comments below

Thx

Uri ("Oo-Ree")
C: 949-378-7568

-Original Message-
From: Yang, Yi Y [mailto:yi.y.y...@intel.com]
Sent: Wednesday, June 1, 2016 5:20 PM
To: Cathy Zhang ; OpenStack Development Mailing List 
(not for usage questions) ; b...@ovn.org
Cc: Jesse Gross ; Jiri Benc 
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

Also cc'ing Jiri and Jesse. I think the mandatory L3 requirement is not 
reasonable for a tunnel port, say VxLAN or VxLAN-gpe; its intention is L2 over 
L3, so the L2 header is a must-have, but the mandatory L3 requirement removed L2.
[UE] pls add more context

[Yi Yang] In the current Linux kernel, a VxLAN-gpe port is an L3 port; that means 
a packet with an L2 header will have its eth header popped by an implicit pop_eth 
when it is output to such a port. But I think this is inappropriate: VxLAN-gpe 
can transfer L2 packets as VxLAN does, and we can't force it to work in L3 mode.

I also think VxLAN + Eth + NSH + Original frame should be an option; at least 
some in industry have such requirements in practice.

So my point is that it would be great if we could support both 
VxLAN-gpe+ETH+NSH+Original L2 and VxLAN+ETH+NSH+Original L2; this would simplify 
our NSH patch upstreaming efforts and speed up merging.
[UE] this " VxLAN+ETH+NSH+Original L2" can be a local packet (i.e. SFF to SF on 
a 'local circuit") IFF OS kernels and SFs will support it, but not sure how it 
can travel on the wire... what is in that added ETH header? 

[Yi Yang] This ETH is from inner L2 (Original L2), but ether_type is 0x894f

[UE] did you mean " VxLAN-gpe+NSH+Original L2" or  " VxLAN-gpe+ETH+NSH+Original 
L2"? The latter is not the packet on the wire

[Yi Yang] The current OVS implementation requires that the packet from a tunnel 
port be an Ethernet packet, so we have to use VxLAN-gpe+Eth+NSH+Original packet; 
I know hardware devices only recognize "VxLAN-gpe+NSH+Original L2".
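
To make the layering discussed here concrete, a sketch that packs the 8-byte
NSH base and service-path headers. The field layout follows RFC 8300, which
was published after this thread (drafts at the time differed slightly); this
is illustrative only, not the OVS datapath code:

import struct

def nsh_headers(spi, si, next_proto=0x3, ttl=63):
    # next_proto=0x3 means the inner payload is Ethernet (the original L2).
    # Ver and the O bit are left at zero.
    md_type = 0x2              # MD type 2, no metadata TLVs appended
    length = 2                 # total NSH header length in 4-byte words
    word0 = (ttl << 22) | (length << 16) | (md_type << 8) | next_proto
    word1 = (spi << 8) | si    # 24-bit service path ID, 8-bit service index
    return struct.pack('!II', word0, word1)

assert len(nsh_headers(spi=42, si=255)) == 8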


-Original Message-
From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
Sent: Thursday, June 02, 2016 2:54 AM
To: OpenStack Development Mailing List (not for usage questions) 
; b...@ovn.org; Yang, Yi Y 

Cc: Cathy Zhang 
Subject: RE: [openstack-dev] [Neutron] support of NSH in networking-SFC

Looks like the work of removing the mandatory L3 requirement associated with 
decapsulated VxLAN-gpe packet also involves OVS kernel change, which is 
difficult. Furthermore, even this blocking issue is resolved and eventually OVS 
accepts the VxLAN-gpe+NSH encapsulation, there is still another issue. 
Current Neutron only supports VXLAN, not VXLAN-gpe, and adopting VXLAN-gpe 
involves consideration of backward compatibility with existing VXLAN VTEP and 
VXLAN Gateway. 

An alternative and maybe easier/faster path could be to push a patch of " VxLAN 
+ Eth + NSH + Original frame" into OVS kernel. This is also IETF compliant 
encapsulation for SFC and does not have the L3 requirement issue and Neutron 
VXLAN-gpe support issue. 

We can probably take this discussion to the OVS mailing alias. 

Thanks,
Cathy

-Original Message-
From: Ben Pfaff [mailto:b...@ovn.org]
Sent: Tuesday, May 31, 2016 9:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On Wed, Jun 01, 2016 at 12:08:23AM +, Yang, Yi Y wrote:
> Ben, yes, we submitted nsh support patch set last year, but ovs 
> community told me we have to push kernel part into Linux kernel tree, 
> we're struggling to do this, but something blocked us from doing this.

It's quite difficult to get patches for a new protocol into the kernel.
You have my sympathy.

> Recently, ovs did some changes in tunnel protocols which requires the 
> packet decapsulated by a tunnel must be a Ethernet packet, but Linux 
> kernel (net-next) tree accepted VxLAN-gpe patch set from Redhat guy 
> (Jiri Benc) which requires the packet decapsulated by VxLAN-gpe port 
> must be L3 packet but not L2 Ethernet packet, this blocked us from 
> progressing better.
> 
> Simon Horman (Netronome guy) has posted a series of patches to remove 
> the mandatory requirement from ovs in order that the packet from a 
> tunnel can be any packet, but so far we didn't see they are merged.

These are slowly working their way through OVS review, but these also have a 
prerequisite on kernel patches, so it's not easy to get them in either.

> I heard ovs 

Re: [openstack-dev] [TripleO][diskimage-builder] Proposing Stephane Miller to dib-core

2016-06-01 Thread Matthew Thode
On 06/01/2016 12:50 PM, Gregory Haynes wrote:
> Hello everyone,
> 
> I'd like to propose adding Stephane Miller (cinerama) to the
> diskimage-builder core team. She has been a huge help with our reviews
> for some time now and I think she would make a great addition to our
> core team. I know I have benefited a lot from her bash expertise in many
> of my reviews and I am sure others have as well :).
> 
> I've spoken with many of the active cores privately and only received
> positive feedback on this, so rather than use this as an all out vote
> (although feel free to add your ++'s) I'd like to use this as a final
> call out in case any objections are wanting to be made. If none have
> been made by next Wednesday (6/8) I'll go ahead and add her to dib-core.
> 
> Cheers,
> Greg
> 
I'm good with it, not a core, but +1 from me.

-- 
-- Matthew Thode (prometheanfire)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kolla] About kolla-ansible reconfigure

2016-06-01 Thread hu.zhijiang
Hi 

After modifying kolla_internal_vip_address in /etc/kolla/global.yml, 
I used kolla-ansible reconfigure to reconfigure OpenStack, but I got the 
following error.

TASK: [mariadb | Restart containers] 
**
skipping: [localhost] => (item=[{'group': 'mariadb', 'name': 'mariadb'}, 
{'KOLLA_BASE_DISTRO': 'centos', 'PS1': '$(tput bold)($(printenv 
KOLLA_SERVICE_NAME))$(tput sgr0)[$(id -un)@$(hostname -s) $(pwd)]$ ', 
'KOLLA_INSTALL_TYPE': 'binary', 'changed': False, 'item': {'group': 
'mariadb', 'name': 'mariadb'}, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'PATH': '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 
'invocation': {'module_name': u'kolla_docker', 'module_complex_args': 
{'action': 'get_container_env', 'name': u'mariadb'}, 'module_args': ''}, 
'KOLLA_SERVICE_NAME': 'mariadb', 'KOLLA_INSTALL_METATYPE': 'rdo'}, {'cmd': 
['docker', 'exec', 'mariadb', '/usr/local/bin/kolla_set_configs', 
'--check'], 'end': '2016-06-02 11:32:18.866276', 'stderr': 
'INFO:__main__:Loading config file at 
/var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config 
file\nINFO:__main__:The config files are in the expected state', 'stdout': 
u'', 'item': {'group': 'mariadb', 'name': 'mariadb'}, 'changed': False, 
'rc': 0, 'failed': False, 'warnings': [], 'delta': '0:00:00.075316', 
'invocation': {'module_name': u'command', 'module_complex_args': {}, 
'module_args': u'docker exec mariadb /usr/local/bin/kolla_set_configs 
--check'}, 'stdout_lines': [], 'failed_when_result': False, 'start': 
'2016-06-02 11:32:18.790960'}])

TASK: [mariadb | Waiting for MariaDB service to be ready through VIP] 
*
failed: [localhost] => {"attempts": 6, "changed": false, "cmd": ["docker", 
"exec", "mariadb", "mysql", "-h", "10.43.114.148/24", "-u", "haproxy", 
"-e", "show databases;"], "delta": "0:00:03.924516", "end": "2016-06-02 
11:33:57.928139", "failed": true, "rc": 1, "start": "2016-06-02 
11:33:54.003623", "stdout_lines": [], "warnings": []}
stderr: ERROR 2005 (HY000): Unknown MySQL server host '10.43.114.148/24' 
(-2)
msg: Task failed as maximum retries was encountered

FATAL: all hosts have already failed -- aborting


It seems that mariadb was not restarted as expected. Note also that the error 
above shows the host handed to mysql as '10.43.114.148/24'; the VIP value 
appears to still carry its /24 prefix length, which mysql -h cannot resolve.
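
A small sanity-check sketch, assuming (from the error text above) that the
problem is the /24 prefix on the VIP value:

import ipaddress

vip = "10.43.114.148/24"        # value implied by the error output
host = vip.split('/')[0]        # a bare IP is what mysql -h expects
ipaddress.ip_address(host)      # raises ValueError if still malformed
print(host)                     # -> 10.43.114.148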



Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-06-01 Thread Elzur, Uri
Few comments below

Thx

Uri ("Oo-Ree")
C: 949-378-7568

-Original Message-
From: Yang, Yi Y [mailto:yi.y.y...@intel.com] 
Sent: Wednesday, June 1, 2016 5:20 PM
To: Cathy Zhang ; OpenStack Development Mailing List 
(not for usage questions) ; b...@ovn.org
Cc: Jesse Gross ; Jiri Benc 
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

Also cc'ing Jiri and Jesse. I think the mandatory L3 requirement is not 
reasonable for a tunnel port, say VxLAN or VxLAN-gpe; its intention is L2 over 
L3, so the L2 header is a must-have, but the mandatory L3 requirement removed L2.
[UE] pls add more context

I also think VxLAN + Eth + NSH + Original frame should be an option; at least 
some in industry have such requirements in practice.

So my point is that it would be great if we could support both 
VxLAN-gpe+ETH+NSH+Original L2 and VxLAN+ETH+NSH+Original L2; this would simplify 
our NSH patch upstreaming efforts and speed up merging.
[UE] this " VxLAN+ETH+NSH+Original L2" can be a local packet (i.e. SFF to SF on 
a 'local circuit") IFF OS kernels and SFs will support it, but not sure how it 
can travel on the wire... what is in that added ETH header? 
[UE] did you mean " VxLAN-gpe+NSH+Original L2" or  " VxLAN-gpe+ETH+NSH+Original 
L2"? The latter is not the packet on the wire


-Original Message-
From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
Sent: Thursday, June 02, 2016 2:54 AM
To: OpenStack Development Mailing List (not for usage questions) 
; b...@ovn.org; Yang, Yi Y 

Cc: Cathy Zhang 
Subject: RE: [openstack-dev] [Neutron] support of NSH in networking-SFC

Looks like the work of removing the mandatory L3 requirement associated with 
decapsulated VxLAN-gpe packet also involves OVS kernel change, which is 
difficult. Furthermore, even this blocking issue is resolved and eventually OVS 
accepts the VxLAN-gpe+NSH encapsulation, there is still another issue. 
Current Neutron only supports VXLAN, not VXLAN-gpe, and adopting VXLAN-gpe 
involves consideration of backward compatibility with existing VXLAN VTEP and 
VXLAN Gateway. 

An alternative and maybe easier/faster path could be to push a patch of " VxLAN 
+ Eth + NSH + Original frame" into OVS kernel. This is also IETF compliant 
encapsulation for SFC and does not have the L3 requirement issue and Neutron 
VXLAN-gpe support issue. 

We can probably take this discussion to the OVS mailing alias. 

Thanks,
Cathy

-Original Message-
From: Ben Pfaff [mailto:b...@ovn.org]
Sent: Tuesday, May 31, 2016 9:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On Wed, Jun 01, 2016 at 12:08:23AM +, Yang, Yi Y wrote:
> Ben, yes, we submitted nsh support patch set last year, but ovs 
> community told me we have to push kernel part into Linux kernel tree, 
> we're struggling to do this, but something blocked us from doing this.

It's quite difficult to get patches for a new protocol into the kernel.
You have my sympathy.

> Recently, ovs did some changes in tunnel protocols which requires the 
> packet decapsulated by a tunnel must be a Ethernet packet, but Linux 
> kernel (net-next) tree accepted VxLAN-gpe patch set from Redhat guy 
> (Jiri Benc) which requires the packet decapsulated by VxLAN-gpe port 
> must be L3 packet but not L2 Ethernet packet, this blocked us from 
> progressing better.
> 
> Simon Horman (Netronome guy) has posted a series of patches to remove 
> the mandatory requirement from ovs in order that the packet from a 
> tunnel can be any packet, but so far we didn't see they are merged.

These are slowly working their way through OVS review, but these also have a 
prerequisite on kernel patches, so it's not easy to get them in either.

> I heard ovs community looks forward to getting nsh patches merged, it 
> will be great if ovs guys can help progress this.

I do plan to do my part in review (but much of this is kernel review, which I'm 
not really involved in anymore).


Re: [openstack-dev] [kolla] Ansible 2.0.0 functional

2016-06-01 Thread David Moreau Simard
2.0, 2.0.1.0 and 2.1 were all fine for me for the most part.
2.0.2.0 was a regression disaster for us (yup, a 0.0.1 increment) but
thankfully they fixed the issues for 2.1.
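
For deployments that need to avoid a known-bad point release, a hedged sketch
(the bad-version string here is only this thread's anecdote, and the guard is
hypothetical, not Kolla code) that fails fast at startup:

import pkg_resources

BAD_VERSIONS = {"2.0.2.0"}  # regression release mentioned above

ver = pkg_resources.get_distribution("ansible").version
if ver in BAD_VERSIONS:
    raise RuntimeError(
        "ansible %s has known regressions; pin a different 2.0.x" % ver)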

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]


On Wed, Jun 1, 2016 at 1:44 AM, Joshua Harlow  wrote:
> Out of curiosity, what keeps on changing (breaking?) in ansible that makes
> it so that something working in 2.0 doesn't work in 2.1? Isn't the point of
> minor version numbers like that so that things in the same major version
> number still actually work...
>
> Steven Dake (stdake) wrote:
>>
>> Hey folks,
>>
>> In case you haven't been watching the review queue, Kolla has been
>> ported to Ansible 2.0. It does not work with Ansible 2.1, however.
>>
>> Regards,
>> -steve
>>


[openstack-dev] [QA] Meeting Thursday June 2nd at 9:00 UTC

2016-06-01 Thread Masayuki Igawa
Hello everyone,

Please reminder that the weekly OpenStack QA team IRC meeting will be
Thursday, June 2nd at 9:00 UTC in the #openstack-meeting channel.

The agenda for the meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_June_2nd_2016_.280900_UTC.29

Anyone is welcome to add an item to the agenda.

To help people figure out what time 9:00 UTC is in other timezones the
next meeting will be at:

04:00 EST
18:00 JST
18:30 ACST
11:00 CEST
04:00 CDT
02:00 PDT

Best Regards,
-- Masayuki Igawa



Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-06-01 Thread Yang, Yi Y
Indeed, but I saw one exceptional case, LISP: it is in OVS but not in the Linux 
kernel. For our NSH patches, the kernel part is easier than the OVS part.

-Original Message-
From: Ben Pfaff [mailto:b...@ovn.org] 
Sent: Thursday, June 02, 2016 7:17 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

I'm probably the wrong person to give advice on kernel development, since I 
haven't been involved in it for years.  I just know that it's difficult, and 
not always because of the code.

It's hard to support a protocol in OVS before it's supported in the kernel, 
since userspace without a kernel implementation is not very useful.

On Wed, Jun 01, 2016 at 09:59:12PM +, Elzur, Uri wrote:
> Hi Ben
> 
> Any guidance you can offer will be appreciated. The process has taken a long 
> time and precious cycles. How can we get to a coordinated kernel and OVS 
> approach to avoid the challenges and potentially misaligned advice we got 
> (per Yi Yang's mail)?
> 
> Thx
> 
> Uri ("Oo-Ree")
> C: 949-378-7568
> 
> 
> -Original Message-
> From: Ben Pfaff [mailto:b...@ovn.org]
> Sent: Tuesday, May 31, 2016 9:48 PM
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Subject: Re: [openstack-dev] [Neutron] support of NSH in 
> networking-SFC
> 
> On Wed, Jun 01, 2016 at 12:08:23AM +, Yang, Yi Y wrote:
> > Ben, yes, we submitted nsh support patch set last year, but ovs 
> > community told me we have to push kernel part into Linux kernel 
> > tree, we're struggling to do this, but something blocked us from doing this.
> 
> It's quite difficult to get patches for a new protocol into the kernel.
> You have my sympathy.
> 
> > Recently, ovs did some changes in tunnel protocols which requires 
> > the packet decapsulated by a tunnel must be a Ethernet packet, but 
> > Linux kernel (net-next) tree accepted VxLAN-gpe patch set from 
> > Redhat guy (Jiri Benc) which requires the packet decapsulated by 
> > VxLAN-gpe port must be L3 packet but not L2 Ethernet packet, this 
> > blocked us from progressing better.
> > 
> > Simon Horman (Netronome guy) has posted a series of patches to 
> > remove the mandatory requirement from ovs in order that the packet 
> > from a tunnel can be any packet, but so far we didn't see they are merged.
> 
> These are slowly working their way through OVS review, but these also have a 
> prerequisite on kernel patches, so it's not easy to get them in either.
> 
> > I heard ovs community looks forward to getting nsh patches merged, 
> > it will be great if ovs guys can help progress this.
> 
> I do plan to do my part in review (but much of this is kernel review, which 
> I'm not really involved in anymore).
> 


Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-06-01 Thread Yang, Yi Y
Also cc'ing Jiri and Jesse. I think the mandatory L3 requirement is not 
reasonable for a tunnel port, say VxLAN or VxLAN-gpe; its intention is L2 over 
L3, so the L2 header is a must-have, but the mandatory L3 requirement removed L2.

I also think VxLAN + Eth + NSH + Original frame should be an option; at least 
some in industry have such requirements in practice.

So my point is that it would be great if we could support both 
VxLAN-gpe+ETH+NSH+Original L2 and VxLAN+ETH+NSH+Original L2; this would simplify 
our NSH patch upstreaming efforts and speed up merging.

-Original Message-
From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com] 
Sent: Thursday, June 02, 2016 2:54 AM
To: OpenStack Development Mailing List (not for usage questions) 
; b...@ovn.org; Yang, Yi Y 

Cc: Cathy Zhang 
Subject: RE: [openstack-dev] [Neutron] support of NSH in networking-SFC

Looks like the work of removing the mandatory L3 requirement associated with 
decapsulated VxLAN-gpe packet also involves OVS kernel change, which is 
difficult. Furthermore, even this blocking issue is resolved and eventually OVS 
accepts the VxLAN-gpe+NSH encapsulation, there is still another issue. 
Current Neutron only supports VXLAN, not VXLAN-gpe, and adopting VXLAN-gpe 
involves consideration of backward compatibility with existing VXLAN VTEP and 
VXLAN Gateway. 

An alternative and maybe easier/faster path could be to push a patch of " VxLAN 
+ Eth + NSH + Original frame" into OVS kernel. This is also IETF compliant 
encapsulation for SFC and does not have the L3 requirement issue and Neutron 
VXLAN-gpe support issue. 

We can probably take this discussion to the OVS mailing alias. 

Thanks,
Cathy

-Original Message-
From: Ben Pfaff [mailto:b...@ovn.org]
Sent: Tuesday, May 31, 2016 9:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On Wed, Jun 01, 2016 at 12:08:23AM +, Yang, Yi Y wrote:
> Ben, yes, we submitted nsh support patch set last year, but ovs 
> community told me we have to push kernel part into Linux kernel tree, 
> we're struggling to do this, but something blocked us from doing this.

It's quite difficult to get patches for a new protocol into the kernel.
You have my sympathy.

> Recently, ovs did some changes in tunnel protocols which requires the 
> packet decapsulated by a tunnel must be a Ethernet packet, but Linux 
> kernel (net-next) tree accepted VxLAN-gpe patch set from Redhat guy 
> (Jiri Benc) which requires the packet decapsulated by VxLAN-gpe port 
> must be L3 packet but not L2 Ethernet packet, this blocked us from 
> progressing better.
> 
> Simon Horman (Netronome guy) has posted a series of patches to remove 
> the mandatory requirement from ovs in order that the packet from a 
> tunnel can be any packet, but so far we didn't see they are merged.

These are slowly working their way through OVS review, but these also have a 
prerequisite on kernel patches, so it's not easy to get them in either.

> I heard ovs community looks forward to getting nsh patches merged, it 
> will be great if ovs guys can help progress this.

I do plan to do my part in review (but much of this is kernel review, which I'm 
not really involved in anymore).



Re: [openstack-dev] [ironic] Tooling for recovering nodes

2016-06-01 Thread Jay Faulkner

Some comments inline.


On 5/31/16 12:26 PM, Devananda van der Veen wrote:

On 05/31/2016 01:35 AM, Dmitry Tantsur wrote:

On 05/31/2016 10:25 AM, Tan, Lin wrote:

Hi,

Recently, I am working on a spec[1] in order to recover nodes which get stuck
in deploying state, so I really expect some feedback from you guys.

Ironic nodes can be stuck in
deploying/deploywait/cleaning/cleanwait/inspecting/deleting if the node is
reserved by a dead conductor (the exclusive lock was not released).
Any further requests will be denied by ironic because it thinks the node
resource is under control of another conductor.

To be more clear, let's narrow the scope and focus on the deploying state
first. Currently, people do have several choices to clear the reserved lock:
1. restart the dead conductor
2. wait up to 2 or 3 minutes and _check_deploying_states() will clear the lock.
3. The operator touches the DB to manually recover these nodes.

Option two looks very promising but there are some weakness:
2.1 It won't work if the dead conductor was renamed or deleted.
2.2 It won't work if the node's specific driver was not enabled on live
conductors.
2.3 It won't work if the node is in maintenance. (only a corner case).

We can and should fix all three cases.

2.1 and 2.2 appear to be a bug in the behavior of _check_deploying_status().

The method claims to do exactly what you suggest in 2.1 and 2.2 -- it gathers a
list of Nodes reserved by *any* offline conductor and tries to release the lock.
However, it will always fail to update them, because objects.Node.release()
raises a NodeLocked exception when called on a Node locked by a different 
conductor.

Here's the relevant code path:

ironic/conductor/manager.py:
1259 def _check_deploying_status(self, context):
...
1269 offline_conductors = self.dbapi.get_offline_conductors()
...
1273 node_iter = self.iter_nodes(
1274 fields=['id', 'reservation'],
1275 filters={'provision_state': states.DEPLOYING,
1276  'maintenance': False,
1277  'reserved_by_any_of': offline_conductors})
...
1281 for node_uuid, driver, node_id, conductor_hostname in node_iter:
1285 try:
1286 objects.Node.release(context, conductor_hostname, node_id)
...
1292 except exception.NodeLocked:
1293 LOG.warning(...)
1297 continue


As far as 2.3, I think we should change the query string at the start of this
method so that it includes nodes in maintenance mode. I think it's both safe and
reasonable (and, frankly, what an operator will expect) that a node which is in
maintenance mode, and in DEPLOYING state, whose conductor is offline, should
have that reservation cleared and be set to DEPLOYFAILED state.
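
A minimal sketch of that adjustment, based only on the code path quoted above
(untested):

# Sketch: same query as _check_deploying_status() above, minus the
# maintenance=False condition, so maintenance-mode nodes held by offline
# conductors are also picked up for release.
node_iter = self.iter_nodes(
    fields=['id', 'reservation'],
    filters={'provision_state': states.DEPLOYING,
             'reserved_by_any_of': offline_conductors})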


This is an excellent idea -- and I'm going to extend it further. If I 
have any nodes in a *ING state, and they are put into maintenance, it 
should force a failure. This is potentially a more API-friendly way of 
cleaning up nodes in bad states -- an operator would need to maintenance 
the node, and once it enters the *FAIL state, troubleshoot why it 
failed, unmaintenance, and return to production.


I obviously strongly desire an "override command" as an operator, but I 
really think this could handle a large percentage of the use cases that 
made me desire it in the first place.



--devananda


Definitely we should improve option 2, but there could be more issues I am not
aware of in a more complicated environment.
So my question is: do we still need a new command to recover these nodes more
easily without accessing the DB, like this PoC [2]:
   ironic-noderecover --node_uuids=UUID1,UUID2
--config-file=/etc/ironic/ironic.conf

I'm -1 to anything silently removing the lock until I see a clear use case
which is impossible to address within Ironic itself. Such a utility may and
will be abused.

I'm fine with anything that does not forcibly remove the lock by default.
I agree such a utility could be abused, but I don't think that's a good
argument for not writing it for operators. Any utility we write that could
or would modify a lock should not do so by default, and should warn before
doing so; still, there are cases where getting a lock cleared is desirable
and necessary.


A good example of this would be an ironic-conductor failing while a node 
is locked, and being brought up with a different hostname. Today, 
there's no way to get that lock off that node again.


Even if you force operators to replace a conductor with one with an
identical hostname, any locked nodes would remain locked while the
replacement was occurring.


Thanks,
Jay Faulkner

Best Regards,

Tan


[1] https://review.openstack.org/#/c/319812
[2] https://review.openstack.org/#/c/311273/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [ironic] Tooling for recovering nodes

2016-06-01 Thread Jay Faulkner

Hey Tan, some comments inline.


On 5/31/16 1:25 AM, Tan, Lin wrote:

Hi,

Recently, I am working on a spec[1] in order to recover nodes which get stuck 
in deploying state, so I really expect some feedback from you guys.

Ironic nodes can be stuck in 
deploying/deploywait/cleaning/cleanwait/inspecting/deleting if the node is 
reserved by a dead conductor (the exclusive lock was not released).
Any further requests will be denied by ironic because it thinks the node 
resource is under control of another conductor.

To be more clear, let's narrow the scope and focus on the deploying state 
first. Currently, people do have several choices to clear the reserved lock:
1. restart the dead conductor
2. wait up to 2 or 3 minutes and _check_deploying_states() will clear the lock.
3. The operator touches the DB to manually recover these nodes.
I actually like option #3 being optionally integrated into a tool to 
clear nodes stuck in *ing state. If specified, it would clear the lock 
on the deploy as it moved it from DEPLOYING -> DEPLOYFAILED. Obviously, 
for cleaning this could be dangerous, and should be documented as such -- 
imagine clearing a lock mid-firmware flash and having a power action 
taken to brick the node.


Given this is tooling intended to handle many cases, I think it's better 
to give the operator the choice to take more dramatic action if they wish.



Thanks,
Jay Faulkner

Option two looks very promising but there are some weaknesses:
2.1 It won't work if the dead conductor was renamed or deleted.
2.2 It won't work if the node's specific driver was not enabled on live 
conductors.
2.3 It won't work if the node is in maintenance. (only a corner case).

Definitely we should improve option 2, but there could be more issues I am not
aware of in a more complicated environment.
So my question is: do we still need a new command to recover these nodes more
easily without accessing the DB, like this PoC [2]:
   ironic-noderecover --node_uuids=UUID1,UUID2
--config-file=/etc/ironic/ironic.conf

Best Regards,

Tan


[1] https://review.openstack.org/#/c/319812
[2] https://review.openstack.org/#/c/311273/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-06-01 Thread Ben Pfaff
I'm probably the wrong person to give advice on kernel development,
since I haven't been involved in it for years.  I just know that it's
difficult, and not always because of the code.

It's hard to support a protocol in OVS before it's supported in the
kernel, since userspace without a kernel implementation is not very
useful.

On Wed, Jun 01, 2016 at 09:59:12PM +, Elzur, Uri wrote:
> Hi Ben
> 
> Any guidance you can offer will be appreciated. The process has taken a long 
> time and precious cycles. How can we get to a coordinated Kernel and OvS 
> approach to avoid the challenges and the potentially misaligned advice we got 
> (per Yi Yang's mail)?
> 
> Thx
> 
> Uri ("Oo-Ree")
> C: 949-378-7568
> 
> 
> -Original Message-
> From: Ben Pfaff [mailto:b...@ovn.org] 
> Sent: Tuesday, May 31, 2016 9:48 PM
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC
> 
> On Wed, Jun 01, 2016 at 12:08:23AM +, Yang, Yi Y wrote:
> > Ben, yes, we submitted the nsh support patch set last year, but the ovs 
> > community told me we have to push the kernel part into the Linux kernel 
> > tree; we're struggling to do this, but something blocked us from doing this.
> 
> It's quite difficult to get patches for a new protocol into the kernel.
> You have my sympathy.
> 
> > Recently, ovs made some changes in tunnel protocols which require that the 
> > packet decapsulated by a tunnel must be an Ethernet packet, but the Linux 
> > kernel (net-next) tree accepted a VxLAN-gpe patch set from a Redhat guy 
> > (Jiri Benc) which requires that the packet decapsulated by a VxLAN-gpe port 
> > must be an L3 packet, not an L2 Ethernet packet; this blocked us from 
> > making progress.
> > 
> > Simon Horman (a Netronome guy) has posted a series of patches to remove 
> > the mandatory requirement from ovs so that the packet from a tunnel can be 
> > any packet, but so far we haven't seen them merged.
> 
> These are slowly working their way through OVS review, but these also have a 
> prerequisite on kernel patches, so it's not easy to get them in either.
> 
> > I heard the ovs community looks forward to getting the nsh patches merged; 
> > it would be great if the ovs folks can help progress this.
> 
> I do plan to do my part in review (but much of this is kernel review, which 
> I'm not really involved in anymore).
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Ansible 2.0.0 functional

2016-06-01 Thread David Shrewsbury
Jeffrey,

I'm not experienced enough in writing plugins to give you a good answer.
You should pop into #ansible-devel
and ask your question there.

-Dave

On Wed, Jun 1, 2016 at 5:59 PM, Jeffrey Zhang 
wrote:

> David,
>
> is there any standard way to create the _tmp path? ansible 2.0 and ansible
> 2.1 do not have the same function signature. [0]
>
> [0]
> https://github.com/openstack/kolla/blob/master/ansible/action_plugins/merge_configs.py#L45,L54
>
> On Wed, Jun 1, 2016 at 9:53 PM, David Shrewsbury <
> shrewsbury.d...@gmail.com> wrote:
>
>> I believe 2.1 was when Toshio introduced the new ziploader, which totally
>> changed how
>> the modules were loaded:
>>
>> https://github.com/ansible/ansible/pull/15246
>>
>> That broke a few of the 2.x OpenStack modules, too. But that was mostly
>> our fault as
>> we didn't code some of them correctly to the proper standards.
>>
>> -Dave
>>
>>
>> On Wed, Jun 1, 2016 at 6:10 AM, Jeffrey Zhang 
>> wrote:
>>
>>> 1. ansible 2.1 makes lots of changes compared to ansible 2.0 in how
>>>    the plugins work. So by default, kolla does not work with ansible 2.1
>>>
>>> 2. the compatibility fix is merged here[0]. So kolla works on both
>>>    ansible 2.1 and ansible 2.0
>>>
>>> [0] https://review.openstack.org/321754
>>>
>>> On Wed, Jun 1, 2016 at 2:46 PM, Monty Taylor 
>>> wrote:
>>>
 On 06/01/2016 08:44 AM, Joshua Harlow wrote:
 > Out of curiosity, what keeps on changing (breaking?) in ansible that
 > makes it so that something working in 2.0 doesn't work in 2.1? Isn't
 the
 > point of minor version numbers like that so that things in the same
 > major version number still actually work...

 I'm also curious to know the answer to this. I expect the 2.0 port to
 have had the possibility of breaking things - I do not expect 2.0 to 2.1
 to be similar. Which is not to say you're wrong about it not working,
 but rather, it would be good to understand what broke so that we can
 better track it in upstream ansible.

 > Steven Dake (stdake) wrote:
 >> Hey folks,
 >>
 >> In case you haven't been watching the review queue, Kolla has been
 >> ported to Ansible 2.0. It does not work with Ansible 2.1, however.
 >>
 >> Regards,
 >> -steve
 >>
 >>
 __
 >>
 >> OpenStack Development Mailing List (not for usage questions)
 >> Unsubscribe:
 >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >
 >
 __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> --
>>> Regards,
>>> Jeffrey Zhang
>>> Blog: http://xcodest.me
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> David Shrewsbury (Shrews)
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
David Shrewsbury (Shrews)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-06-01 Thread Elzur, Uri
Please note that VXLAN-gpe and VXLAN use different UDP ports. Lots of 
consideration and discussion went into this while working on the IETF draft 
and on ODL implementations. So I'm not sure I follow the backwards 
compatibility issues raised below.

At any rate, the OvS-Eth-NSH work could make parallel progress in the OvS 
community alongside the path outlined by Tim/Igor, where networking-sfc using 
ODL and a backend can support full NSH now.

Thx

Uri ("Oo-Ree")
C: 949-378-7568


-Original Message-
From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com] 
Sent: Wednesday, June 1, 2016 11:54 AM
To: OpenStack Development Mailing List (not for usage questions) 
; b...@ovn.org; Yang, Yi Y 

Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

Looks like the work of removing the mandatory L3 requirement associated with 
the decapsulated VxLAN-gpe packet also involves an OVS kernel change, which is 
difficult. Furthermore, even if this blocking issue is resolved and OVS 
eventually accepts the VxLAN-gpe+NSH encapsulation, there is still another 
issue: current Neutron only supports VXLAN, not VXLAN-gpe, and adopting 
VXLAN-gpe involves consideration of backward compatibility with existing 
VXLAN VTEPs and VXLAN gateways.

An alternative and maybe easier/faster path could be to push a patch for 
"VxLAN + Eth + NSH + Original frame" into the OVS kernel module. This is also 
an IETF-compliant encapsulation for SFC and has neither the L3 requirement 
issue nor the Neutron VXLAN-gpe support issue.
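
For clarity, the two encapsulations under discussion look roughly like this
(a simplified sketch; note the distinct IANA-assigned UDP ports, 4789 for
VXLAN and 4790 for VXLAN-gpe):

    VxLAN-gpe + NSH:   outer Eth / IP / UDP(4790) / VxLAN-gpe / NSH / original frame
    VxLAN + Eth + NSH: outer Eth / IP / UDP(4789) / VxLAN / Eth / NSH / original frame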

We can probably take this discussion to the OVS mailing list. 

Thanks,
Cathy

-Original Message-
From: Ben Pfaff [mailto:b...@ovn.org]
Sent: Tuesday, May 31, 2016 9:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On Wed, Jun 01, 2016 at 12:08:23AM +, Yang, Yi Y wrote:
> Ben, yes, we submitted the nsh support patch set last year, but the ovs 
> community told me we have to push the kernel part into the Linux kernel 
> tree; we're struggling to do this, but something blocked us from doing this.

It's quite difficult to get patches for a new protocol into the kernel.
You have my sympathy.

> Recently, ovs made some changes in tunnel protocols which require that the 
> packet decapsulated by a tunnel must be an Ethernet packet, but the Linux 
> kernel (net-next) tree accepted a VxLAN-gpe patch set from a Redhat guy 
> (Jiri Benc) which requires that the packet decapsulated by a VxLAN-gpe port 
> must be an L3 packet, not an L2 Ethernet packet; this blocked us from 
> making progress.
> 
> Simon Horman (a Netronome guy) has posted a series of patches to remove 
> the mandatory requirement from ovs so that the packet from a tunnel can be 
> any packet, but so far we haven't seen them merged.

These are slowly working their way through OVS review, but these also have a 
prerequisite on kernel patches, so it's not easy to get them in either.

> I heard the ovs community looks forward to getting the nsh patches merged; 
> it would be great if the ovs folks can help progress this.

I do plan to do my part in review (but much of this is kernel review, which I'm 
not really involved in anymore).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] support of NSH in networking-SFC

2016-06-01 Thread Elzur, Uri
Cathy

So we are ok moving forward on networking-sfc?
What is the next step from your pov?
Thx

Uri ("Oo-Ree")
C: 949-378-7568


-Original Message-
From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com] 
Sent: Wednesday, June 1, 2016 11:58 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] support of NSH in networking-SFC

Igor and Tim,

+1 on your suggestion. 

Thanks,
Cathy

-Original Message-
From: Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com] 
Sent: Tuesday, May 31, 2016 8:48 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] support of NSH in networking-SFC

Hi Tim,

+1
Focus on the plugin and API while improving the n-sfc<->ODL interaction to 
match that.

In parallel, early (non-merged) support in the OVS driver itself could be 
attempted, based on the unofficial April 2016 NSH patches for OVS [1]. After 
official support gets merged, it would be less troublesome to adapt since the 
big hurdles of mapping the abstraction to OVS would have been mostly overcome.

[1] 
https://github.com/yyang13/ovs_nsh_patches/tree/98e1d3d6b1ed49d902edaede11820853b0ad5037
 
Best regards,
Igor.


-Original Message-
From: Tim Rozet [mailto:tro...@redhat.com] 
Sent: Tuesday, May 31, 2016 4:21 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

Hey Paul,
ODL uses OVS as its dataplane (but is also not limited to just OVS), and ODL 
already supports IETF SFC today in the ODL SFC project.  My point was Neutron 
is no longer in scope of managing OVS, since it is managed by ODL.  I think 
your comments echo the 2 sides of this discussion - whether or not OVS is in 
scope of a protocol implementation in Neutron networking-sfc.  In my opinion it 
is if you consider OVS driver support, but it is not if you consider a 
different networking-sfc driver.

You can implement IETF NSH in the networking-sfc API/DB Model, without caring 
if it is actually supported in OVS (when using ODL as a driver) because all 
networking-sfc cares about should be whether its driver correctly supports SFC.  To 
that end, if you are using ODL as your SFC driver, then absolutely you should 
verify it is an IETF SFC compliant API/model.  However, outside of that scope, 
it is not networking-sfc's responsibility to care what ODL is using as its 
dataplane backend or for that matter its version of OVS.  It is now up to ODL 
to manage that for networking-sfc, and networking-sfc just needs to ensure it 
can talk to ODL.  

I think this is a pragmatic way to go, since networking-sfc doesn't yet support 
an ODL driver and we are in the process of adding one.  We could leave the 
networking-sfc OVS driver alone, add support for NSH to the networking-sfc 
plugin, and then only allow API calls that use NSH to work if ODL networking 
driver is the backend.  That way we allow for some experimental NSH support in 
networking-sfc without officially supporting it in the OVS driver until it is 
officially supported in OVS.
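
As a rough sketch, such gating could live in the plugin layer; the attribute
and exception names below are made up for illustration and are not
networking-sfc's actual code:

    def create_port_pair(self, context, port_pair):
        params = port_pair['port_pair'].get('service_function_parameters', {})
        if params.get('correlation') == 'nsh' and self.driver_name != 'odl':
            # hypothetical exception: reject NSH unless the ODL driver
            # is the configured backend
            raise NshNotSupportedByDriver(driver=self.driver_name)
        return super(SfcPlugin, self).create_port_pair(context, port_pair)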

Tim Rozet
Red Hat SDN Team

- Original Message -
From: "Paul Carver" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Monday, May 30, 2016 10:12:34 PM
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On 5/25/2016 13:24, Tim Rozet wrote:
> In my opinion, it is a better approach to break this down into plugin vs 
> driver support.  There should be no problem adding support into 
> networking-sfc plugin for NSH today.  The OVS driver however, depends on OVS 
> as the dataplane - which I can see a solid argument for only supporting an 
> official version with a non-NSH solution.  The plugin side should have no 
> dependency on OVS.  Therefore if we add NSH SFC support to an ODL driver in 
> networking-odl, and use that as our networking-sfc driver, the argument about 
> OVS goes away (since neutron/networking-sfc is totally unaware of the 
> dataplane at this point).  We would just need to ensure that API calls to 
> networking-sfc specifying NSH port pairs returned an error if the enabled driver 
> was OVS (until official OVS with NSH support is released).
>

Does ODL have a dataplane? I thought it used OvS. Is the ODL project supporting 
its own fork of OvS that has NSH support or is ODL expecting that the user will 
patch OvS themselves?

I don't know the details of why OvS hasn't added NSH support so I can't judge 
the validity of the concerns, but one way or another there has to be a 
production-quality dataplane for networking-sfc to front-end.

If ODL has forked OvS or in some other manner is supporting its own NSH capable 
dataplane then it's reasonable to consider that the ODL driver could be the 
first networking-sfc driver to support NSH. However, we 

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-01 Thread Hongbin Lu
Personally, I think this is a good idea, since it can address a set of similar 
use cases like the ones below:
* I want to deploy a k8s cluster to 2 availability zones (in future, 2 
regions/clouds).
* I want to spin up N nodes in AZ1, M nodes in AZ2.
* I want to scale the number of nodes in a specific AZ/region/cloud. For 
example, add/remove K nodes from AZ1 (with AZ2 untouched).

The use cases above should be very common and universal. To address them, 
Magnum needs to support provisioning a heterogeneous set of nodes at deploy 
time and managing them at runtime. It looks like the proposed idea (manually 
managing individual nodes or individual groups of nodes) can address this 
requirement very well. Besides the proposed idea, I cannot think of an 
alternative solution.

Therefore, I vote to support the proposed idea.

Best regards,
Hongbin

> -Original Message-
> From: Hongbin Lu
> Sent: June-01-16 11:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> Hi team,
> 
> A blueprint was created for tracking this idea:
> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
> nodes . I won't approve the BP until there is a team decision on
> accepting/rejecting the idea.
> 
> From the discussion at the design summit, it looks like everyone is OK with
> the idea in general (with some disagreements on the API style). However,
> from the last team meeting, it looks like some people disagree with the idea
> fundamentally, so I re-raised this on the ML to re-discuss.
> 
> If you agree or disagree with the idea of manually managing the Heat
> stacks (that contains individual bay nodes), please write down your
> arguments here. Then, we can start debating on that.
> 
> Best regards,
> Hongbin
> 
> > -Original Message-
> > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> > Sent: May-16-16 5:28 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > The discussion at the summit was very positive around this
> requirement
> > but as this change will make a large impact to Magnum it will need a
> > spec.
> >
> > On the API of things, I was thinking a slightly more generic approach
> > to incorporate other lifecycle operations into the same API.
> > Eg:
> > magnum bay-manage  
> >
> > magnum bay-manage  reset –hard
> > magnum bay-manage  rebuild
> > magnum bay-manage  node-delete  magnum bay-manage
> >  node-add –flavor  magnum bay-manage  node-reset
> >  magnum bay-manage  node-list
> >
> > Tom
> >
> > From: Yuanying OTSUKA 
> > Reply-To: "OpenStack Development Mailing List (not for usage
> > questions)" 
> > Date: Monday, 16 May 2016 at 01:07
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > Hi,
> >
> > I think users also want to specify which node to delete.
> > So we should manage “node” individually.
> >
> > For example:
> > $ magnum node-create —bay …
> > $ magnum node-list —bay
> > $ magnum node-delete $NODE_UUID
> >
> > Anyway, if magnum wants to manage the lifecycle of container
> > infrastructure, this feature is necessary.
> >
> > Thanks
> > -yuanying
> >
> >
> > On Mon, May 16, 2016 at 7:50, Hongbin Lu wrote:
> > Hi all,
> >
> > This is a continued discussion from the design summit. For recap,
> > Magnum manages bay nodes by using ResourceGroup from Heat. This
> > approach works but it is infeasible to manage the heterogeneity
> across
> > bay nodes, which is a frequently demanded feature. As an example,
> > there is a request to provision bay nodes across availability zones
> [1].
> > There is another request to provision bay nodes with different set of
> > flavors [2]. For the request features above, ResourceGroup won’t work
> > very well.
> >
> > The proposal is to remove the usage of ResourceGroup and manually
> > create Heat stack for each bay nodes. For example, for creating a
> > cluster with 2 masters and 3 minions, Magnum is going to manage 6
> Heat
> > stacks (instead of 1 big Heat stack as right now):
> > * A kube cluster stack that manages the global resources
> > * Two kube master stacks that manage the two master nodes
> > * Three kube minion stacks that manage the three minion nodes
> >
> > The proposal might require an additional API endpoint to manage nodes
> > or a group of nodes. For example:
> > $ magnum nodegroup-create --bay XXX --flavor m1.small --count 2 --
> > availability-zone us-east-1 ….
> > $ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3 --
> > availability-zone us-east-2 …
> >
> > Thoughts?
> >
> > [1] https://blueprints.launchpad.net/magnum/+spec/magnum-
> 

[openstack-dev] [Neutron][networking-sfc] meeting topics for 6/1/2016 networking-sfc project IRC meeting

2016-06-01 Thread Cathy Zhang
Hi everyone,
The following link shows the topics I have for this week's project meeting 
discussion (the meeting time is unchanged, still 1700 UTC Thursday). Feel free 
to add more.
https://wiki.openstack.org/wiki/Meetings/ServiceFunctionChainingMeeting
Meeting Info: Every Thursday 1700 UTC on #openstack-meeting-4
Thanks,
Cathy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] 8.1.0 released (Mitaka)

2016-06-01 Thread Emilien Macchi
Hi,

We're happy to announce the release of 8.1.0 (Mitaka).
All you need to know is documented here:
http://releases.openstack.org/teams/puppet_openstack.html
You'll also find tarballs and release notes.

Thanks to our group for the hard work!
Regards,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] support of NSH in networking-SFC

2016-06-01 Thread Elzur, Uri
+1 from me too

How do we close on this thread? Is anyone on the ML NOT cool with this 
approach as outlined by Tim below?

Thx

Uri ("Oo-Ree")
C: 949-378-7568

-Original Message-
From: Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com] 
Sent: Tuesday, May 31, 2016 8:48 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] support of NSH in networking-SFC

Hi Tim,

+1
Focus on the plugin and API while improving the n-sfc<->ODL interaction to 
match that.

In parallel, early (non-merged) support in the OVS driver itself could be 
attempted, based on the unofficial April 2016 NSH patches for OVS [1]. After 
official support gets merged, it would be less troublesome to adapt since the 
big hurdles of mapping the abstraction to OVS would have been mostly overcome.

[1] 
https://github.com/yyang13/ovs_nsh_patches/tree/98e1d3d6b1ed49d902edaede11820853b0ad5037
 
Best regards,
Igor.


-Original Message-
From: Tim Rozet [mailto:tro...@redhat.com] 
Sent: Tuesday, May 31, 2016 4:21 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

Hey Paul,
ODL uses OVS as its dataplane (but is also not limited to just OVS), and ODL 
already supports IETF SFC today in the ODL SFC project.  My point was Neutron 
is no longer in scope of managing OVS, since it is managed by ODL.  I think 
your comments echo the 2 sides of this discussion - whether or not OVS is in 
scope of a protocol implementation in Neutron networking-sfc.  In my opinion it 
is if you consider OVS driver support, but it is not if you consider a 
different networking-sfc driver.

You can implement IETF NSH in the networking-sfc API/DB Model, without caring 
if it is actually supported in OVS (when using ODL as a driver) because all 
networking-sfc cares about should be whether its driver correctly supports SFC.  To 
that end, if you are using ODL as your SFC driver, then absolutely you should 
verify it is an IETF SFC compliant API/model.  However, outside of that scope, 
it is not networking-sfc's responsibility to care what ODL is using as its 
dataplane backend or for that matter its version of OVS.  It is now up to ODL 
to manage that for networking-sfc, and networking-sfc just needs to ensure it 
can talk to ODL.  

I think this is a pragmatic way to go, since networking-sfc doesn't yet support 
an ODL driver and we are in the process of adding one.  We could leave the 
networking-sfc OVS driver alone, add support for NSH to the networking-sfc 
plugin, and then only allow API calls that use NSH to work if ODL networking 
driver is the backend.  That way we allow for some experimental NSH support in 
networking-sfc without officially supporting it in the OVS driver until it is 
officially supported in OVS.

Tim Rozet
Red Hat SDN Team

- Original Message -
From: "Paul Carver" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Monday, May 30, 2016 10:12:34 PM
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On 5/25/2016 13:24, Tim Rozet wrote:
> In my opinion, it is a better approach to break this down into plugin vs 
> driver support.  There should be no problem adding support into 
> networking-sfc plugin for NSH today.  The OVS driver however, depends on OVS 
> as the dataplane - which I can see a solid argument for only supporting an 
> official version with a non-NSH solution.  The plugin side should have no 
> dependency on OVS.  Therefore if we add NSH SFC support to an ODL driver in 
> networking-odl, and use that as our networking-sfc driver, the argument about 
> OVS goes away (since neutron/networking-sfc is totally unaware of the 
> dataplane at this point).  We would just need to ensure that API calls to 
> networking-sfc specifying NSH port pairs returned an error if the enabled driver 
> was OVS (until official OVS with NSH support is released).
>

Does ODL have a dataplane? I thought it used OvS. Is the ODL project supporting 
its own fork of OvS that has NSH support or is ODL expecting that the user will 
patch OvS themselves?

I don't know the details of why OvS hasn't added NSH support so I can't judge 
the validity of the concerns, but one way or another there has to be a 
production-quality dataplane for networking-sfc to front-end.

If ODL has forked OvS or in some other manner is supporting its own NSH capable 
dataplane then it's reasonable to consider that the ODL driver could be the 
first networking-sfc driver to support NSH. However, we still need to make sure 
that the API is an abstraction, not implementation specific.

But if ODL is not supporting its own NSH capable dataplane, instead expecting 
the user to run a patched OvS that doesn't have upstream acceptance then I 
think we would be building a 

Re: [openstack-dev] [kolla] Ansible 2.0.0 functional

2016-06-01 Thread Jeffrey Zhang
David,

is there any standard way to create the _tmp path? ansible 2.0 and ansible
2.1 do not have the same function signature. [0]

[0]
https://github.com/openstack/kolla/blob/master/ansible/action_plugins/merge_configs.py#L45,L54
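
One way to tolerate a changed method signature without hard-coding a version
check is to inspect it at runtime; a minimal sketch (illustrative only, not
kolla's merged fix):

    import inspect

    def _make_tmp(action, remote_user=None):
        # dispatch on however many arguments the installed ansible's
        # _make_tmp_path() accepts, instead of checking the version string
        args = inspect.getargspec(action._make_tmp_path).args
        if len(args) > 1:  # expects something beyond 'self'
            return action._make_tmp_path(remote_user)
        return action._make_tmp_path()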

On Wed, Jun 1, 2016 at 9:53 PM, David Shrewsbury 
wrote:

> I believe 2.1 was when Toshio introduced the new ziploader, which totally
> changed how
> the modules were loaded:
>
> https://github.com/ansible/ansible/pull/15246
>
> That broke a few of the 2.x OpenStack modules, too. But that was mostly
> our fault as
> we didn't code some of them correctly to the proper standards.
>
> -Dave
>
>
> On Wed, Jun 1, 2016 at 6:10 AM, Jeffrey Zhang 
> wrote:
>
>> 1. ansible 2.1 makes lots of changes compared to ansible 2.0 in how
>>    the plugins work. So by default, kolla does not work with ansible 2.1
>>
>> 2. the compatibility fix is merged here[0]. So kolla works on both
>>    ansible 2.1 and ansible 2.0
>>
>> [0] https://review.openstack.org/321754
>>
>> On Wed, Jun 1, 2016 at 2:46 PM, Monty Taylor 
>> wrote:
>>
>>> On 06/01/2016 08:44 AM, Joshua Harlow wrote:
>>> > Out of curiosity, what keeps on changing (breaking?) in ansible that
>>> > makes it so that something working in 2.0 doesn't work in 2.1? Isn't
>>> the
>>> > point of minor version numbers like that so that things in the same
>>> > major version number still actually work...
>>>
>>> I'm also curious to know the answer to this. I expect the 2.0 port to
>>> have had the possibility of breaking things - I do not expect 2.0 to 2.1
>>> to be similar. Which is not to say you're wrong about it not working,
>>> but rather, it would be good to understand what broke so that we can
>>> better track it in upstream ansible.
>>>
>>> > Steven Dake (stdake) wrote:
>>> >> Hey folks,
>>> >>
>>> >> In case you haven't been watching the review queue, Kolla has been
>>> >> ported to Ansible 2.0. It does not work with Ansible 2.1, however.
>>> >>
>>> >> Regards,
>>> >> -steve
>>> >>
>>> >>
>>> __
>>> >>
>>> >> OpenStack Development Mailing List (not for usage questions)
>>> >> Unsubscribe:
>>> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>> >
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Regards,
>> Jeffrey Zhang
>> Blog: http://xcodest.me
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> David Shrewsbury (Shrews)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-06-01 Thread Elzur, Uri
Hi Ben

Any guidance you can offer will be appreciated. The process has taken long time 
and precious cycles. How can we get to a coordinated Kernel and OvS approach to 
avoid the challenges and potentially misaligned advise we got (per Yi Yang's 
mail)?

Thx

Uri ("Oo-Ree")
C: 949-378-7568


-Original Message-
From: Ben Pfaff [mailto:b...@ovn.org] 
Sent: Tuesday, May 31, 2016 9:48 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On Wed, Jun 01, 2016 at 12:08:23AM +, Yang, Yi Y wrote:
> Ben, yes, we submitted the nsh support patch set last year, but the ovs 
> community told me we have to push the kernel part into the Linux kernel 
> tree; we're struggling to do this, but something blocked us from doing this.

It's quite difficult to get patches for a new protocol into the kernel.
You have my sympathy.

> Recently, ovs made some changes in tunnel protocols which require that the 
> packet decapsulated by a tunnel must be an Ethernet packet, but the Linux 
> kernel (net-next) tree accepted a VxLAN-gpe patch set from a Redhat guy 
> (Jiri Benc) which requires that the packet decapsulated by a VxLAN-gpe port 
> must be an L3 packet, not an L2 Ethernet packet; this blocked us from 
> making progress.
> 
> Simon Horman (a Netronome guy) has posted a series of patches to remove 
> the mandatory requirement from ovs so that the packet from a tunnel can be 
> any packet, but so far we haven't seen them merged.

These are slowly working their way through OVS review, but these also have a 
prerequisite on kernel patches, so it's not easy to get them in either.

> I heard the ovs community looks forward to getting the nsh patches merged; 
> it would be great if the ovs folks can help progress this.

I do plan to do my part in review (but much of this is kernel review, which I'm 
not really involved in anymore).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [TaskFlow] TaskFlow persistence: Job failure retry

2016-06-01 Thread Joshua Harlow

Interesting way to combine taskflow + celery.

I didn't expect it to be used like this, but more power to you!

Taskflow itself has some similar capabilities via 
http://docs.openstack.org/developer/taskflow/workers.html#design but 
anyway what u've done is pretty neat as well.


I am assuming this isn't an openstack project (due to the usage of celery); 
any details on what's being worked on (am curious here)?


pnkk wrote:

Thanks for the nice documentation.

To my knowledge celery is widely used for distributed task processing.
This fits our requirement perfectly: we want to return an immediate
response to the user from our API server and run the long-running task in
the background. Celery also gives flexibility with the worker types
(process (can overcome GIL problems too)/eventlet...) and it also
provides nice message brokers (rabbitmq, redis...).

We used both celery and taskflow for our core processing to leverage the
benefits of both. Taskflow provides nice primitives (execute, revert,
pre/post stuff) which take load off the application.

As far as the actual issue is concerned, I found one way to solve it by
using the celery "retry" option. This along with late_acks makes the
application highly fault tolerant.

http://docs.celeryproject.org/en/latest/faq.html#faq-acks-late-vs-retry

Regards,
Kanthi


On Sat, May 28, 2016 at 1:51 AM, Joshua Harlow wrote:

Seems like u could just use
http://docs.openstack.org/developer/taskflow/jobs.html (it appears
that you may not be?); the job itself would when failed be then
worked on by a different job consumer.

Have u looked at those? It almost appears that u are using celery as
a job distribution system (similar to the jobs.html link mentioned
above)? Is that somewhat correct (I haven't seen anyone try this,
wondering how u are using it and the choices that directed u to
that, aka, am curious)?

-Josh

pnkk wrote:

To be specific, we hit this issue when the node running our service is
rebooted.
Our solution is designed in a way that each and every job is a celery
task, and inside the celery task we create a taskflow flow.

We enabled late_acks in celery (which uses rabbitmq as the message
broker), so if our service/node goes down, another healthy service can
pick up the job and complete it.
This works fine, but we just hit this rare case where the node was
rebooted just when taskflow was updating something in the database.

In this case, it raises an exception and the job is marked failed. Since
it is complete (with failure), the message is removed from rabbitmq and
other workers will not be able to process it.
Can taskflow handle such I/O errors gracefully, or should the application
try to catch this exception? If the application has to handle it, what
would happen to that particular database transaction which failed just
when the node was rebooted? Who will retry this transaction?

Thanks,
Kanthi

On Fri, May 27, 2016 at 5:39 PM, pnkk wrote:

 Hi,

 When taskflow engine is executing a job, the execution
failed due to
 IO error(traceback pasted below).

 2016-05-25 19:45:21.717 7119 ERROR
 taskflow.engines.action_engine.engine 127.0.1.1 [-]  Engine
 execution has failed, something bad must of happened (last 10
 machine transitions were [('SCHEDULING', 'WAITING'),
('WAITING',
'ANALYZING'), ('ANALYZING', 'SCHEDULING'), ('SCHEDULING',
'WAITING'), ('WAITING', 'ANALYZING'), ('ANALYZING', 'SCHEDULING'),
 ('SCHEDULING', 'WAITING'), ('WAITING', 'ANALYZING'),
('ANALYZING',
'GAME_OVER'), ('GAME_OVER', 'FAILURE')])
 2016-05-25 19:45:21.717 7119 TRACE
 taskflow.engines.action_engine.engine Traceback (most
recent call last):
 2016-05-25 19:45:21.717 7119 TRACE
 taskflow.engines.action_engine.engine   File

"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py",
 line 269, in run_iter
 2016-05-25 19:45:21.717 7119 TRACE
 taskflow.engines.action_engine.engine
 failure.Failure.reraise_if_any(memory.failures)
 2016-05-25 19:45:21.717 7119 TRACE
 taskflow.engines.action_engine.engine   File

"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/types/failure.py",
 line 336, in reraise_if_any
 2016-05-25 19:45:21.717 7119 TRACE
 

[openstack-dev] [heat][swift] New TC Resolutions on DefCore Tests

2016-06-01 Thread Mark Voelker
Hi Everyone,

At today’s DefCore Committee meeting, we discussed a couple of newly-approved 
TC resolutions and wanted to take a quick moment to make folks aware of them in 
case they weren’t already.  These new resolutions may impact what capabilities 
and tests projects ask to have included in future DefCore Guidelines:

2016-05-04 Recommendation on API Proxy Tests for DefCore
https://governance.openstack.org/resolutions/20160504-defcore-proxy-tests.html
https://review.openstack.org/312719

2016-05-04 Recommendation on Location of Tests for DefCore
https://governance.openstack.org/resolutions/20160504-defcore-test-location.html
https://review.openstack.org/312718

The latter resolution is probably one that will be of the most interest to 
projects who are looking to add new Capabilities to DefCore Guidelines.  
RefStack has been able to handle tests that live in Tempest or that live in 
project trees but use the Tempest plugin interface for some time now, and 
the DefCore Committee has generally guided project teams that either approach 
was acceptable.  In the new resolution, the TC "encourages the DefCore committee to 
consider it an indication of future technical direction that we do not want 
tests outside of the Tempest repository used for trademark enforcement, and 
that any new or existing tests that cover capabilities they want to consider 
for trademark enforcement should be placed in Tempest.  Project teams should 
work with the DefCore committee to move any existing tests that need to move as 
a result of this policy.”  
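
For reference, project-tree tests hook into Tempest through the plugin
interface via a setuptools entry point; a minimal sketch (the project and
module names are illustrative):

    # setup.cfg of the project that carries the tests
    [entry_points]
    tempest.test_plugins =
        my_service_tests = my_project.tests.tempest.plugin:MyServiceTempestPlugin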

At present, I don’t think any tests in existing Board-approved DefCore 
Guidelines will need to move as a result of this resolution—however I am aware 
of a few teams that were interested in having in-project-tree tests used in the 
future (hence I’ve added [heat] and [swift] to the subject line).  Hopefully 
those folks were already aware of the new resolution and are making plans 
accordingly, but we thought it would be best to send out a quick communiqué 
just to be sure since this is a change in guidance since the last Summit.  As a 
reminder, our next round of identifying new candidate Capabilities won’t begin 
for a couple of months yet [2], so there’s some time for project teams to 
discuss what (if any) actions they wish to take.

[1] 
http://eavesdrop.openstack.org/meetings/defcore/2016/defcore.2016-06-01-16.00.html
[2] 
http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/2015B.rst#n10

At Your Service,

Mark T. Voelker



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] prototype of a DSL for generating Dockerfiles

2016-06-01 Thread Michał Jastrzębski
Aaaand correct link: https://review.openstack.org/#/c/323589/ sorry
for pastefail.

On 1 June 2016 at 15:55, Michał Jastrzębski  wrote:
> So this is a prototype of working template overrides:
> https://review.openstack.org/#/c/323612/
>
> Pass --template-overrides=path-to-file to build.py.
> In the override file you can add custom code/logic/Dockerfile content at
> any of the hooks we provide in Dockerfiles, and we'll provide a lot of
> them since it's a free and non-breaking operation. With enough blocks you'll
> be able to do virtually anything with any of the containers.
>
> This one is already working. The only work needed is to provide more
> hooks and continue refactoring the Dockerfiles.
>
> Cheers,
> Michal
>
> On 31 May 2016 at 19:36, Steven Dake (stdake)  wrote:
>>
>>
>> On 5/31/16, 1:42 PM, "Michał Jastrzębski"  wrote:
>>
>>>I am opposed to this idea as I don't think we need this. We can solve
>>>many problems by using jinja2 to a greater extent. I'll publish a demo of
>>>a few improvements soon; please bear with me before we make an
>>>arch-changing call.
>>
>> Can you make a specification please as you have asked me to do?
>>
>>>
>>>On 29 May 2016 at 14:41, Steven Dake (stdake)  wrote:

>On 5/27/16, 1:58 AM, "Steven Dake (stdake)"  wrote:
>
>>
>>
>>On 5/26/16, 8:45 PM, "Swapnil Kulkarni (coolsvap)" 
>>wrote:
>>
>>>On Fri, May 27, 2016 at 8:35 AM, Steven Dake (stdake)
>>>
>>>wrote:
 Hey folks,

 While Swapnil has been busy churning the dockerfile.j2 files to all
match
 the same style, and we also had summit where we declared we would
solve
the
 plugin problem, I have decided to begin work on a DSL prototype.

 Here are the problems I want to solve in order of importance by this
work:

 Build CentOS, Ubuntu, Oracle Linux, Debian, Fedora containers
 Provide a programmatic way to manage Dockerfile construction rather
then a
 manual (with vi or emacs or the like) mechanism
 Allow complete overrides of every facet of Dockerfile construction,
most
 especially repositories per container (rather than in the base
container) to
 permit the use case of dependencies from one version with
dependencies
in
 another version of a different service
 Get out of the business of maintaining 100+ dockerfiles but instead
maintain
 one master file which defines the data that needs to be used to
construct
 Dockerfiles
 Permit different types of optimizations or Dockerfile building by
changing
 around the parser implementation - to allow layering of each
operation,
or
 alternatively to merge layers as we do today

 I don't believe we can proceed with both binary and source plugins
given our
 current implementation of Dockerfiles in any sane way.

 I further don't believe it is possible to customize repositories &
installed
 files per container, which I receive increasing requests for
offline.

 To that end, I've created a very very rough prototype which builds
the
base
 container as well as a mariadb container.  The mariadb container
builds
and
 I suspect would work.

 An example of the DSL usage is here:
 https://review.openstack.org/#/c/321468/4/dockerdsl/dsl.yml

 A very poorly written parser is here:
 https://review.openstack.org/#/c/321468/4/dockerdsl/load.py

 I played around with INI as a format, to take advantage of
oslo.config
and
 kolla-build.conf, but that didn't work out.  YML is the way to go.

 I'd appreciate reviews on the YML implementation especially.

 How I see this work progressing is as follows:

 A yml file describing all docker containers for all distros is
placed
in
 kolla/docker
 The build tool adds an option --use-yml which uses the YML file
 A parser (such as load.py above) is integrated into build.py to lay
down he
 Dockerfiles
 Wait 4-6 weeks for people to find bugs and complain
 Make the --use-yml option the default for 4-6 weeks
 Once we feel confident in the yml implementation, remove all
Dockerfile.j2
 files
 Remove the --use-yml option
 Remove all jinja2-isms from build.py

 This is similar to the work that took place to convert from raw
Dockerfiles
 to Dockerfile.j2 files.  We are just reusing that pattern.
Hopefully
this
 will be the last major 

Re: [openstack-dev] [kolla] prototype of a DSL for generating Dockerfiles

2016-06-01 Thread Michał Jastrzębski
So this is a prototype of working template overrides:
https://review.openstack.org/#/c/323612/

Pass --template-overrides=path-to-file to build.py.
In the override file you can add custom code/logic/Dockerfile content at
any of the hooks we provide in Dockerfiles, and we'll provide a lot of
them since it's a free and non-breaking operation. With enough blocks you'll
be able to do virtually anything with any of the containers.

This one is already working. The only work needed is to provide more
hooks and continue refactoring the Dockerfiles.
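
For those who haven't seen the mechanism, it rests on ordinary Jinja2
template inheritance; a minimal sketch (the block name and package are
illustrative, not the actual hook names):

    # in a Dockerfile.j2 shipped by kolla, a named hook:
    {% block mariadb_footer %}{% endblock %}

    # in the operator's override file passed via --template-overrides:
    {% extends parent_template %}
    {% block mariadb_footer %}
    RUN yum -y install my-internal-plugin
    {% endblock %}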

Cheers,
Michal

On 31 May 2016 at 19:36, Steven Dake (stdake)  wrote:
>
>
> On 5/31/16, 1:42 PM, "Michał Jastrzębski"  wrote:
>
>>I am opposed to this idea as I don't think we need this. We can solve
>>many problems by using jinja2 to a greater extent. I'll publish a demo of
>>a few improvements soon; please bear with me before we make an
>>arch-changing call.
>
> Can you make a specification please as you have asked me to do?
>
>>
>>On 29 May 2016 at 14:41, Steven Dake (stdake)  wrote:
>>>
On 5/27/16, 1:58 AM, "Steven Dake (stdake)"  wrote:

>
>
>On 5/26/16, 8:45 PM, "Swapnil Kulkarni (coolsvap)" 
>wrote:
>
>>On Fri, May 27, 2016 at 8:35 AM, Steven Dake (stdake)
>>
>>wrote:
>>> Hey folks,
>>>
>>> While Swapnil has been busy churning the dockerfile.j2 files to all
>>>match
>>> the same style, and we also had summit where we declared we would
>>>solve
>>>the
>>> plugin problem, I have decided to begin work on a DSL prototype.
>>>
>>> Here are the problems I want to solve in order of importance by this
>>>work:
>>>
>>> Build CentOS, Ubuntu, Oracle Linux, Debian, Fedora containers
>>> Provide a programmatic way to manage Dockerfile construction rather
>>>then a
>>> manual (with vi or emacs or the like) mechanism
>>> Allow complete overrides of every facet of Dockerfile construction,
>>>most
>>> especially repositories per container (rather than in the base
>>>container) to
>>> permit the use case of dependencies from one version with
>>>dependencies
>>>in
>>> another version of a different service
>>> Get out of the business of maintaining 100+ dockerfiles but instead
>>>maintain
>>> one master file which defines the data that needs to be used to
>>>construct
>>> Dockerfiles
>>> Permit different types of optimizations or Dockerfile building by
>>>changing
>>> around the parser implementation - to allow layering of each
>>>operation,
>>>or
>>> alternatively to merge layers as we do today
>>>
>>> I don't believe we can proceed with both binary and source plugins
>>>given our
>>> current implementation of Dockerfiles in any sane way.
>>>
>>> I further don't believe it is possible to customize repositories &
>>>installed
>>> files per container, which I receive increasing requests for
>>>offline.
>>>
>>> To that end, I've created a very very rough prototype which builds
>>>the
>>>base
>>> container as well as a mariadb container.  The mariadb container
>>>builds
>>>and
>>> I suspect would work.
>>>
>>> An example of the DSL usage is here:
>>> https://review.openstack.org/#/c/321468/4/dockerdsl/dsl.yml
>>>
>>> A very poorly written parser is here:
>>> https://review.openstack.org/#/c/321468/4/dockerdsl/load.py
>>>
>>> I played around with INI as a format, to take advantage of
>>>oslo.config
>>>and
>>> kolla-build.conf, but that didn't work out.  YML is the way to go.
>>>
>>> I'd appreciate reviews on the YML implementation especially.
>>>
>>> How I see this work progressing is as follows:
>>>
>>> A yml file describing all docker containers for all distros is
>>>placed
>>>in
>>> kolla/docker
>>> The build tool adds an option --use-yml which uses the YML file
>>> A parser (such as load.py above) is integrated into build.py to lay
>>>down he
>>> Dockerfiles
>>> Wait 4-6 weeks for people to find bugs and complain
>>> Make the --use-yml option the default for 4-6 weeks
>>> Once we feel confident in the yml implementation, remove all
>>>Dockerfile.j2
>>> files
>>> Remove the --use-yml option
>>> Remove all jinja2-isms from build.py
>>>
>>> This is similar to the work that took place to convert from raw
>>>Dockerfiles
>>> to Dockerfile.j2 files.  We are just reusing that pattern.
>>>Hopefully
>>>this
>>> will be the last major refactor of the dockerfiles unless someone
>>>has
>>>some
>>> significant complaints about the approach.
>>>
>>> Regards
>>> -steve
>
> Hey folks,
>
> I have produced a specification for Kolla's DSL (which I call Elemental).
> The spec is ready for review here:
> 

Re: [openstack-dev] [oslo] [keystone] rolling dogpile.core into dogpile.cache, removing namespace packaging (PLEASE REVIEW)

2016-06-01 Thread Mike Bayer
Just a reminder, dogpile.cache is doing away with namespace packaging in 
version 0.6.0, due for the end of this week or sometime next week. 
dogpile.core is being retired and left as-is.   No changes should be 
needed by anyone using only dogpile.cache.




On 05/30/2016 06:17 PM, Mike Bayer wrote:

Hi all -

Just a heads up on what's happening with dogpile.cache: in version 0.6.0 we
are rolling the functionality of the dogpile.core package into
dogpile.cache itself, and retiring the use of namespace package naming
for dogpile.cache.

Towards retiring the use of namespace packaging, the magic
"declare_namespace() / extend_path()" logic is being removed from the
file dogpile/__init__.py in dogpile.cache, and the "namespace_package"
directive is being removed from setup.py.

However, currently, the plan is to leave alone entirely the
"dogpile.core" package as is, and to no longer use the name
"dogpile.core" within dogpile.cache at all; the constructs that it
previously imported from "dogpile.core" it now just imports from
"dogpile" and "dogpile.util" from within the dogpile.cache package.

The caveat here is that Python environments that have dogpile.cache
0.5.7 or earlier installed will also have dogpile.core 0.4.1 installed
as well, and dogpile.core *does* still contain the namespace package
verbiage as before.   From our testing, we don't see there being any
problem with this, however, I know there are people on this list who are
vastly more familiar than I am with namespace packaging and I would
invite them to comment on this as well as on the gerrit review [1] (the
gerrit invites anyone with a Github account to register and comment).

Note that outside of the Openstack world, there are a very small number
of applications that make use of dogpile.core directly.  From our
grepping we can find no mentions of "dogpile.core" in any Openstack
requirements files.For these applications, if a Python environment
already has dogpile.core installed, this would continue to be used;
however dogpile.cache also includes a file dogpile/core.py which sets up
a compatible namespace, so that applications which list only
dogpile.cache in their requirements but make use of "dogpile.core"
constructs will continue to work as before.
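
In other words, the shim just re-exports the old names from their new
homes; roughly (a sketch of the idea, not the literal file contents):

    # dogpile/core.py shipped inside the dogpile.cache distribution
    from dogpile import Lock, NeedRegenerationException  # noqa
    from dogpile.util import NameRegistry, ReadWriteMutex  # noqa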

I would ask anyone reading this to please alert me to any person, any
project, or any announcement medium which may be necessary in order to
ensure that anyone who needs to be made aware of these changes is aware
of them and has vetted them ahead of time.   I would like to release
dogpile.cache 0.6.0 by the end of the week if possible.  I will send
this email a few more times to the list to make sure that it is seen.


[1] https://gerrit.sqlalchemy.org/#/c/89/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] I'm going to expire open bug reports older than 18 months.

2016-06-01 Thread Matt Riedemann

On 5/31/2016 7:35 AM, Sean Dague wrote:

On 05/30/2016 04:02 PM, Shoham Peller wrote:

I support Clint's comment, and as an example, only today I was able to
search a bug and to see it was reported 2 years ago and wasn't solved since.
I've commented on the bug saying it happened to me in an up-to-date nova.
I'm talking about a bug which is on your list -
https://bugs.launchpad.net/nova/+bug/1298075

I guess I wouldn't have been able to do so if the bug was closed.


A closed bug still shows up in the search, and if you try to report a
bug. So you'd still see it in reporting.

That bug is actually a classic instance of something which shouldn't be
in the bug tracker. It's a known issue of all of OpenStack and
Keystone's token architecture. It requires a bunch of Keystone feature
work to be addressed.

Having a more public "Known Issues in OpenStack" googlable page might be
way more appropriate for this so we don't spend a ton of time
duplicating issues into these buckets.

-Sean



Heh, I opened that bug 2 years ago. And it's a duplicate of several bugs 
at this point and has a fix available, so I've marked it a duplicate of 
the bug that has the fix.


The main point of removing this old stuff is so we can actually see new 
things when doing triage, since we at least have some consistent triage 
for new bugs going on now.


Anyway, it's a drop in the bucket. A bug that's two years old with no 
one working on it to me means, meh, it's either not important or if 
someone cares and doesn't find it, they'll report a new bug (which 
should get triaged) and provide a fix if they care enough.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TaskFlow] TaskFlow persistence: Job failure retry

2016-06-01 Thread pnkk
Thanks for the nice documentation.

To my knowledge celery is widely used for distributed task processing. This
fits our requirement perfectly where we want to return an immediate response
to the user from our API server and run long-running tasks in the background.
Celery also gives flexibility with the worker types (processes, which can
overcome GIL problems too, or eventlet, ...) and it also provides nice message
brokers (RabbitMQ, Redis, ...).

We used both celery and taskflow for our core processing to leverage the
benefits of both. Taskflow provides nice primitives (execute, revert,
pre/post stuff) which take load off the application.

As far as the actual issue is concerned, I found one way to solve it by
using the celery "retry" option. This, along with late acks, makes the
application highly fault tolerant.

http://docs.celeryproject.org/en/latest/faq.html#faq-acks-late-vs-retry
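
A minimal sketch of that retry-plus-late-ack pattern, assuming a configured
Celery app and a run_flow() placeholder standing in for the TaskFlow flow
construction (names are illustrative, not the application's actual code):

    from celery import Celery

    app = Celery('jobs', broker='amqp://guest@localhost//')

    def run_flow(job_id):
        """Placeholder: build and run the TaskFlow flow for this job."""

    @app.task(bind=True, acks_late=True, max_retries=3,
              default_retry_delay=30)
    def process_job(self, job_id):
        try:
            run_flow(job_id)
        except IOError as exc:
            # With acks_late the message stays queued until the task
            # returns, so a rebooted worker's job is redelivered; retry()
            # covers transient failures like an interrupted DB write.
            raise self.retry(exc=exc)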

Regards,
Kanthi


On Sat, May 28, 2016 at 1:51 AM, Joshua Harlow 
wrote:

> Seems like u could just use
> http://docs.openstack.org/developer/taskflow/jobs.html (it appears that
> you may not be?); the job itself would when failed be then worked on by a
> different job consumer.
>
> Have u looked at those? It almost appears that u are using celery as a job
> distribution system (similar to the jobs.html link mentioned above)? Is
> that somewhat correct (I haven't seen anyone try this, wondering how u are
> using it and the choices that directed u to that, aka, am curious)?
>
> -Josh
>
> pnkk wrote:
>
>> To be specific, we hit this issue when the node running our service is
>> rebooted.
>> Our solution is designed in a way that each and every job is a celery
>> task and inside the celery task, we create a taskflow flow.
>>
>> We enabled late_acks in celery (using rabbitmq as message broker), so if
>> our service/node goes down, other healthy service can pick the job and
>> completes it.
>> This works fine, but we just hit this rare case where the node was
>> rebooted just when taskflow is updating something to the database.
>>
>> In this case, it raises an exception and the job is marked failed. Since
>> it is complete(with failure), message is removed from the rabbitmq and
>> other worker would not be able to process it.
>> Can taskflow handle such I/O errors gracefully or should application try
>> to catch this exception? If application has to handle it what would
>> happen to that particular database transaction which failed just when
>> the node is rebooted? Who will retry this transaction?
>>
>> Thanks,
>> Kanthi
>>
>> On Fri, May 27, 2016 at 5:39 PM, pnkk > > wrote:
>>
>> Hi,
>>
>> When taskflow engine is executing a job, the execution failed due to
>> IO error(traceback pasted below).
>>
>> 2016-05-25 19:45:21.717 7119 ERROR
>> taskflow.engines.action_engine.engine 127.0.1.1 [-]  Engine
>> execution has failed, something bad must of happened (last 10
>> machine transitions were [('SCHEDULING', 'WAITING'), ('WAITING',
>> 'ANALYZING'), ('ANALYZING', 'SCHEDULING'), ('SCHEDULING',
>> 'WAITING'), ('WAITING', 'ANALYZING'), ('ANALYZING', 'SCHEDULING'),
>> ('SCHEDULING', 'WAITING'), ('WAITING', 'ANALYZING'), ('ANALYZING',
>> 'GAME_OVER'), ('GAME_OVER', 'FAILURE')])
>> 2016-05-25 19:45:21.717 7119 TRACE
>> taskflow.engines.action_engine.engine Traceback (most recent call
>> last):
>> 2016-05-25 19:45:21.717 7119 TRACE
>> taskflow.engines.action_engine.engine   File
>>
>> "/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py",
>> line 269, in run_iter
>> 2016-05-25 19:45:21.717 7119 TRACE
>> taskflow.engines.action_engine.engine
>> failure.Failure.reraise_if_any(memory.failures)
>> 2016-05-25 19:45:21.717 7119 TRACE
>> taskflow.engines.action_engine.engine   File
>>
>> "/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/types/failure.py",
>> line 336, in reraise_if_any
>> 2016-05-25 19:45:21.717 7119 TRACE
>> taskflow.engines.action_engine.engine failures[0].reraise()
>> 2016-05-25 19:45:21.717 7119 TRACE
>> taskflow.engines.action_engine.engine   File
>>
>> "/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/types/failure.py",
>> line 343, in reraise
>> 2016-05-25 19:45:21.717 7119 TRACE
>> taskflow.engines.action_engine.engine six.reraise(*self._exc_info)
>> 2016-05-25 19:45:21.717 7119 TRACE
>> taskflow.engines.action_engine.engine   File
>>
>> "/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/engines/action_engine/scheduler.py",
>> line 94, in schedule
>> 2016-05-25 19:45:21.717 7119 TRACE
>> taskflow.engines.action_engine.engine
>> futures.add(scheduler.schedule(atom))
>> 2016-05-25 19:45:21.717 7119 TRACE
>> 

Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-06-01 Thread Ryan Moats
John McDowall  wrote on 05/31/2016 07:57:02
PM:

> From: John McDowall 
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: Ben Pfaff , "disc...@openvswitch.org"
> , Justin Pettit ,
> "OpenStack Development Mailing List"  d...@lists.openstack.org>, Russell Bryant 
> Date: 05/31/2016 07:57 PM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> More help is always great :-). As far as how to collaborate, whatever
> is easiest for everyone – I am pretty flexible.
>
> Regards
>
> John

Ok, then I'll ask that we go the route of submitting WIP patches to each of
networking-sfc and networking-ovn and an RFC patch to d...@openvswitch.org,
and iterate through review.openstack.org and patchworks.

Could you submit the initial patches today or tomorrow? I'd rather
go that route since you have the lion's share of the work so far
and credit where credit is due...

Ryan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Kilo end of life

2016-06-01 Thread Matt Riedemann
FYI, I've got some changes up to project-config and the releases repo to 
mark kilo as end of life [1]. Andreas has done the same for the manuals.


The project-config changes remove the kilo periodic job definitions and 
usage from all repos, regardless of whether they are managed by the 
stable-maint team, e.g. murano.


The projects which don't have a kilo-eol tag or final release from Dave 
Walker should do that using their own release process, however they have 
handled stable branch EOL in the past, as for Juno.


[1] https://review.openstack.org/#/q/topic:kilo-eol

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements][all] Cleaning up Global Requirements

2016-06-01 Thread Davanum Srinivas
Team,

Please see the following reviews. If you see a package being dropped
that you need, please chime in ASAP:
https://review.openstack.org/#/q/status:open+project:openstack/requirements+branch:master+topic:remove-crufts

If your project needs a package but does not follow the requirements
process, then you should file the appropriate reviews as mentioned in the
email from a few weeks ago:
http://markmail.org/message/kb6jlhiuhmxea454

If you do not file reviews to follow the g-r/u-c process, it would be
very difficult for us to help you in the future when things start
failing in the CI jobs, so please make it a priority for your project
to do so.

Any questions, please find us on #openstack-requirements.

Thanks,
Dims

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][manila] Moving forward with landing manila in tripleo

2016-06-01 Thread Ben Swartzlander
I think it makes sense to merge the tripleo heat templates without m-dat 
support, as including m-dat will require a bunch of dependent patches 
and slow everything down. The lack of the m-dat service won't cause any 
issues other than that the experimental share-migration APIs won't work. 
We should of course start work on all of those other patches 
immediately, and add a follow-on patch to add m-dat support to tripleo.


One other thing -- the design of the m-dat service currently doesn't 
lend itself to HA or scale-out configurations, but the whole point of 
creating this separate service is to provide for HA and scale out of 
data-oriented operations, so I expect that by the end of Newton any 
issues related to active/active m-dat will have been addressed.


-Ben


On 05/30/2016 04:48 AM, Marios Andreou wrote:

On 27/05/16 22:46, Rodrigo Barbieri wrote:

Hello Marios,



Hi Rodrigo, thanks very much for taking the time, indeed that clarifies
quite a lot:


The Data Service is needed for the Share Migration feature in manila since the
Mitaka release.

There has not been any work done yet towards adding it to puppet. Since its
introduction in Mitaka, it has been made compatible only with devstack so
far.


I see so at least that confirms I didn't just miss it at puppet-manila
... so that is a prerequisite really to us being able to configure and
enable manila-data in tripleo and someone will have to look at doing that.



I have not invested time thinking about how it should fit in an HA
environment at this stage. This is a service that currently sports a single
instance, but we have plans to make it more scalable in the future.
What I have briefly thought about is the idea where there would be a
scheduler that decides whether to send the data job to m-dat1, m-dat2 or
m-dat3 and so on, based on information that indicates how busy each Data
Service instance is.

For this moment, active/passive makes sense in the context that manila
expects only a single instance of m-dat. But active/active would allow the
service to be load balanced through HAProxy and could partially accomplish
what we have plans to achieve in the future.


OK thanks, so we can proceed with a/p for manila-share and manila-data
(one thought below) for now and revisit once you've worked out the
details there.



I hope I have addressed your question. The absence of m-dat means the
Share Migration feature will not work.



thanks for the clarification. So then I wonder if this is a feature we
can live w/out for now, especially if this is an obstacle to landing
manila-anything in tripleo. I mean, if we can live w/out the Share
Migration feature until we get proper support for configuring
manila-data landed, then let's land w/out manila-data and just be really
clear about what is going on, manila-data pending etc.

thanks again, marios





Regards,

On Fri, May 27, 2016 at 10:10 AM, Marios Andreou  wrote:


Hi all, I explicitly cc'd a few folks I thought might be interested for
visibility, sorry for spam if you're not. This email is about getting
manila landed into tripleo asap, and the current obstacles to that (at
least those visible to me):

The current review [1] isn't going to land as is, regardless of the
outcome/discussion of any of the following points because all the
services are going to "composable controller services". How do people
feel about me merging my review at [2] into its parent review (which is
the current manila review at [1]). My review just takes what is in [1]
(caveats below) and makes it 'composable', and includes a dependency on
[3] which is the puppet-tripleo side for the 'composable manila'.

   ---> Proposal: merge the 'composable manila' tripleo-heat-templates
review @ [2] into the parent review @ [1]. The review at [2] will be
abandoned. We will continue to try and land [1] in its new 'composable
manila' form.

WRT the 'caveats' mentioned above and why I haven't just ported
what is in the current manila review @ [1] into the composable one @
[2]... there are two main things I've changed, both of which on
guidance/discussion on the reviews.

The first is addition of manila-data (wasn't in the original/current
review at [1]). The second a change to the pacemaker constraints, which
I've corrected to make manila-data and manila-share pacemaker a/p but
everything else systemd managed, based on ongoing discussion at [3].

So IMO to move forward I need clarity on both those points. For
manila-data my concerns are is it already available where we need it. I
looked at puppet-manila [4] and couldn't quickly find much (any) mention
of manila-data. We need it there if we are to configure anything for it
via puppet. The other unknown/concern here is whether manila-data gets
delivered with the manila package (I recall manila-share possibly, at
least one of them, had a stand-alone package) otherwise we'll need to
add it to the image. But mainly my question here is, can we live without
it? I mean can we deploy 

Re: [openstack-dev] [nova] API changes on limit / marker / sort in Newton

2016-06-01 Thread Matt Riedemann

On 5/31/2016 7:38 AM, Sean Dague wrote:

On 05/30/2016 10:05 PM, Zhenyu Zheng wrote:

I think it is good to share codes and a single microversion can make
life more easier during coding.
Can we approve those specs first and then decide on the details in IRC
and patch review? Because
the non-priority spec deadline is so close.

Thanks

On Tue, May 31, 2016 at 1:09 AM, Ken'ichi Ohmichi > wrote:

2016-05-29 19:25 GMT-07:00 Alex Xu >:
>
>
> 2016-05-20 20:05 GMT+08:00 Sean Dague >:
>>
>> There are a number of changes up for spec reviews that add parameters to
>> LIST interfaces in Newton:
>>
>> * keypairs-pagination (MERGED) -
>>
>> 
https://github.com/openstack/nova-specs/blob/8d16fc11ee6d01b5a9fe1b8b7ab7fa6dff460e2a/specs/newton/approved/keypairs-pagination.rst#L2
>> * os-instances-actions - https://review.openstack.org/#/c/240401/
>> * hypervisors - https://review.openstack.org/#/c/240401/
>> * os-migrations - https://review.openstack.org/#/c/239869/
>>
>> I think that limit / marker is always a legit thing to add, and I almost
>> wish we just had a single spec which is "add limit / marker to the
>> following APIs in Newton"
>>
>
> Are you looking for code sharing or one microversion? For code sharing, it
> sounds ok if people have some co-work. Probably we need a common
> pagination-supporting model_query function for all of those. For one
> microversion, I'm a little hesitant; we should keep each change small, or
> enable all in one microversion. But if we have some base code for
> pagination support, we probably can make pagination a default supported
> thing for all list methods?

It is nice to share some common code for this; that would also be nice for
writing the API docs, to know which APIs support them.
And it is also nice to do it with a single microversion for the above
resources, because we can avoid microversion bumping conflicts, and none
of them seem like a big change.


There is already common code for limit / marker.

I don't think these all need to be one microversion, they are honestly
easier to review if they are not.

However in future we should probably make 1 spec for all limit / marker
adds during a cycle. Just because the answer will be *yes* and seems
like more work to have everything be a dedicated spec.

-Sean



Agree with Sean, I'd prefer separate microversions since that makes 
getting these in easier: they are easier to review (and remember we 
make changes to python-novaclient for each of these also).


Also agree with using a single spec in the future, like Sean did with 
the API deprecation spec - deprecating multiple APIs but a single spec 
since the changes are the same.
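
As background for readers, a minimal sketch of the limit/marker pattern
under discussion -- the client pages through a LIST API by passing the id
of the last item seen; the list_page callable here is a stand-in for any
paginated API, not nova's actual code:

    def iterate_all(list_page, page_size=100):
        marker = None
        while True:
            page = list_page(limit=page_size, marker=marker)
            if not page:
                return
            for item in page:
                yield item
            # The next request starts after the last item returned.
            marker = page[-1]['id']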


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] support of NSH in networking-SFC

2016-06-01 Thread Cathy Zhang
Igor and Tim,

+1 on your suggestion. 

Thanks,
Cathy

-Original Message-
From: Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com] 
Sent: Tuesday, May 31, 2016 8:48 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] support of NSH in networking-SFC

Hi Tim,

+1
Focus on the plugin and API while improving the n-sfc<->ODL interaction to 
match that.

In parallel, early (non-merged) support in OVS driver itself could be 
attempted, based on the unofficial April 2016 NSH patches for OVS [1]. After 
official support gets merged, it would be less troublesome to adapt since the 
big hurdles of mapping the abstraction to OVS would have been mostly overcome.

[1] 
https://github.com/yyang13/ovs_nsh_patches/tree/98e1d3d6b1ed49d902edaede11820853b0ad5037
 
Best regards,
Igor.


-Original Message-
From: Tim Rozet [mailto:tro...@redhat.com] 
Sent: Tuesday, May 31, 2016 4:21 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

Hey Paul,
ODL uses OVS as its dataplane (but is also not limited to just OVS), and ODL 
already supports IETF SFC today in the ODL SFC project.  My point was Neutron 
is no longer in scope of managing OVS, since it is managed by ODL.  I think 
your comments echo the 2 sides of this discussion - whether or not OVS is in 
scope of a protocol implementation in Neutron networking-sfc.  In my opinon it 
is if you consider OVS driver support, but it is not if you consider a 
different networking-sfc driver.

You can implement IETF NSH in the networking-sfc API/DB model without caring 
whether it is actually supported in OVS (when using ODL as a driver), because all 
networking-sfc should care about is whether its driver correctly supports SFC.  To 
that end, if you are using ODL as your SFC driver, then absolutely you should 
verify it is an IETF SFC compliant API/model.  However, outside of that scope, 
it is not networking-sfc's responsibility to care what ODL is using as its 
dataplane backend or, for that matter, its version of OVS.  It is now up to ODL 
to manage that for networking-sfc, and networking-sfc just needs to ensure it 
can talk to ODL.  

I think this is a pragmatic way to go, since networking-sfc doesn't yet support 
an ODL driver and we are in the process of adding one.  We could leave the 
networking-sfc OVS driver alone, add support for NSH to the networking-sfc 
plugin, and then only allow API calls that use NSH to work if ODL networking 
driver is the backend.  That way we allow for some experimental NSH support in 
networking-sfc without officially supporting it in the OVS driver until it is 
officially supported in OVS.
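
An illustrative sketch only of that gating -- reject NSH-specific API calls
unless the loaded driver advertises NSH support; the attribute and exception
names here are assumptions, not actual networking-sfc code:

    class NshNotSupported(Exception):
        pass

    def validate_chain_parameters(driver, chain_params):
        correlation = chain_params.get('correlation', 'mpls')
        if correlation == 'nsh' and not getattr(driver, 'supports_nsh',
                                                False):
            raise NshNotSupported(
                "correlation=nsh requires a driver with NSH support "
                "(e.g. ODL); the OVS driver cannot offer it yet.")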

Tim Rozet
Red Hat SDN Team

- Original Message -
From: "Paul Carver" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Monday, May 30, 2016 10:12:34 PM
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On 5/25/2016 13:24, Tim Rozet wrote:
> In my opinion, it is a better approach to break this down into plugin vs 
> driver support.  There should be no problem adding support into 
> networking-sfc plugin for NSH today.  The OVS driver however, depends on OVS 
> as the dataplane - which I can see a solid argument for only supporting an 
> official version with a non-NSH solution.  The plugin side should have no 
> dependency on OVS.  Therefore if we add NSH SFC support to an ODL driver in 
> networking-odl, and use that as our networking-sfc driver, the argument about 
> OVS goes away (since neutron/networking-sfc is totally unaware of the 
> dataplane at this point).  We would just need to ensure that API calls to 
> networking-sfc specifying NSH port pairs returned error if the enabled driver 
> was OVS (until official OVS with NSH support is released).
>

Does ODL have a dataplane? I thought it used OvS. Is the ODL project supporting 
its own fork of OvS that has NSH support or is ODL expecting that the user will 
patch OvS themself?

I don't know the details of why OvS hasn't added NSH support so I can't judge 
the validity of the concerns, but one way or another there has to be a 
production-quality dataplane for networking-sfc to front-end.

If ODL has forked OvS or in some other manner is supporting its own NSH capable 
dataplane then it's reasonable to consider that the ODL driver could be the 
first networking-sfc driver to support NSH. However, we still need to make sure 
that the API is an abstraction, not implementation specific.

But if ODL is not supporting its own NSH capable dataplane, instead expecting 
the user to run a patched OvS that doesn't have upstream acceptance then I 
think we would be building a rickety tower by piling networking-sfc on top of 
that unstable base.




Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-06-01 Thread Cathy Zhang
Looks like the work of removing the mandatory L3 requirement associated with 
decapsulated VxLAN-gpe packets also involves OVS kernel changes, which is 
difficult. Furthermore, even if this blocking issue is resolved and OVS 
eventually accepts the VxLAN-gpe+NSH encapsulation, there is still another issue. 
Current Neutron only supports VXLAN, not VXLAN-gpe, and adopting VXLAN-gpe 
involves consideration of backward compatibility with existing VXLAN VTEP and 
VXLAN Gateway. 

An alternative and maybe easier/faster path could be to push a patch of " VxLAN 
+ Eth + NSH + Original frame" into OVS kernel. This is also IETF compliant 
encapsulation for SFC and does not have the L3 requirement issue and Neutron 
VXLAN-gpe support issue. 

We can probably take this discussion to the OVS mailing list. 

Thanks,
Cathy

-Original Message-
From: Ben Pfaff [mailto:b...@ovn.org] 
Sent: Tuesday, May 31, 2016 9:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On Wed, Jun 01, 2016 at 12:08:23AM +, Yang, Yi Y wrote:
> Ben, yes, we submitted nsh support patch set last year, but ovs 
> community told me we have to push kernel part into Linux kernel tree, 
> we're struggling to do this, but something blocked us from doing this.

It's quite difficult to get patches for a new protocol into the kernel.
You have my sympathy.

> Recently, ovs did some changes in tunnel protocols which requires the 
> packet decapsulated by a tunnel must be an Ethernet packet, but Linux 
> kernel (net-next) tree accepted VxLAN-gpe patch set from Redhat guy 
> (Jiri Benc) which requires the packet decapsulated by VxLAN-gpe port 
> must be L3 packet but not L2 Ethernet packet, this blocked us from 
> progressing better.
> 
> Simon Horman (Netronome guy) has posted a series of patches to remove 
> the mandatory requirement from ovs in order that the packet from a 
> tunnel can be any packet, but so far we didn't see they are merged.

These are slowly working their way through OVS review, but these also have a 
prerequisite on kernel patches, so it's not easy to get them in either.

> I heard ovs community looks forward to getting nsh patches merged, it 
> will be great if ovs guys can help progress this.

I do plan to do my part in review (but much of this is kernel review, which I'm 
not really involved in anymore).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: keystone federation user story

2016-06-01 Thread Dolph Mathews
On Wed, May 25, 2016 at 2:57 AM Jamie Lennox  wrote:

> On 25 May 2016 at 03:55, Alexander Makarov  wrote:
>
>> Colleagues,
>>
>> here is an actual use case for shadow users assignments, let's discuss
>> possible solutions: all suggestions are appreciated.
>>
>> -- Forwarded message --
>> From: Andrey Grebennikov 
>> Date: Tue, May 24, 2016 at 9:43 AM
>> Subject: keystone federation user story
>> To: Alexander Makarov 
>>
>>
>> Main production use case:
>> As a system administrator I need to create assignments for federated
>> users into projects before the user has authenticated for the first
>> time.
>>
>> Two different approaches.
>> 1. A user has to be assigned directly into the project with the role
>> Role1. Since shadow users were implemented, Keystone database has the
>> record of the user when the federated user authenticates for the first
>> time. When it happens, the user gets unscoped token and Keystone registers
>> the user in the database with generated ID (the result of hashing the name
>> and the domain). At this point the user cannot get scoped token yet since
>> the user has not been assigned to any project.
>> Nonetheless there was a bug
>> https://bugs.launchpad.net/keystone/+bug/1313956 which was abandoned,
>> and the reporter says that currently it is possible to assign role in the
>> project to non-existing user (API only, no CLI). It doesn't help much
>> though since it is barely possible to predict the ID of the user if it
>> doesn't exist yet.
>>
>> Potential solution - allow per-user project auto-creation. This will
>> allow the user to get scoped token with a pre-defined role (should be
>> either mentioned in config or in mapping) and execute operations right away.
>>
>> Disadvantages: less control and order (will potentially end up with
>> infinite empty projects).
>> Benefits: user is authorized right away.
>>
>
> This is something that has come up a few times as a workflow problem. For
> some group of users you should end up with your own project that doesn't
> exist until the first time you log in. This is something I think we could
> extend the mapper to handle. It wouldn't mean the user is authorized
> immediately; it would just solve the workflow of personal projects.
>

I completely agree with the solution that Jamie is describing here
(although I think it has even more potential than just personal projects),
and attempted to capture it in this keystone spec:

  https://review.openstack.org/#/c/324055/


>
>
>> Another potential solution - clearly describe a possibility to assign
>> shadow user to a project (client should generate the ID correctly), even
>> though the user has not been authenticated for the first time yet.
>>
>> Disadvantages: high risk of administrator's mistake when typing user's ID.
>> Benefits: user doesn't have to execute first dummy authentication in
>> order to be registered.
>>
>
> I would prefer not to do this. It either involves creating a user and then
> somehow associating what federated information they will present with
> later, or allowing you to create a user with a predetermined or predictable
> id. I dont think we should add either of those APIs.
>
>
>>
>> 2. Operate with the groups. It means that the user is a member of the
>> remote group and we propose the groups to be assigned to the projects
>> instead of the users.
>> There is no concept of shadow groups yet, so it still has to be
>> implemented.
>>
>> Same problem - in order to be able to assign the group to the project
>> currently it has to exist in Keystone database.
>>
>
> I'm not sure what you want for shadow groups here. Groups are always a
> keystone concept, they have never been ephemeral in the way that federated
> users used to be. Are you wanting  to make it so that keystone groups are
> auto created?
>
> Mapping federated users into groups has always been the way federation was
> designed in keystone because even though you can't know the actual users
> that are going to log in, it is very likely they fall into something that
> can fairly easily be categorized by looking at the roles that come in from
> the IDP assertion. So your mapping does something like "if user has the
> admin role put them in the federated-admin group", the federated-admin
> group has already been established and already has roles on a number of
> projects. Users are then automatically granted those roles on those
> projects. You could go so far as to check for user names in the mapping
> here but that's not a sustainable solution.
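
A sketch of the kind of mapping rule described here, written as a Python
dict in keystone's OS-FEDERATION mapping format; the remote attribute name
("orgRoles") and the group/domain names are assumptions for illustration:

    mapping = {
        "rules": [
            {
                "local": [
                    {"user": {"name": "{0}"}},
                    {"group": {"name": "federated-admin",
                               "domain": {"name": "Default"}}},
                ],
                "remote": [
                    {"type": "REMOTE_USER"},
                    {"type": "orgRoles", "any_one_of": ["admin"]},
                ],
            },
        ],
    }

Users whose assertion carries the admin role are placed in the pre-created
federated-admin group and inherit whatever role assignments that group
already has on its projects.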
>
>
>> It should be either allowed to pre-create the project for a group (based
>> on some specific flags in mappings),
>>
>
> maybe - if you created the groups why don't you know the projects they are
> going to be in?
>
>
>> or it should be allowed to assign non-existing groups into the projects.
>>
>
> still not sure on this non-existing groups concept.
>
>
>>
>> 

Re: [openstack-dev] [ironic] Virtual midcycle date poll

2016-06-01 Thread Jim Rollenhagen
On Thu, May 19, 2016 at 09:25:18AM -0400, Jim Rollenhagen wrote:
> Hi Ironickers,
> 
> We decided in our last meeting that the midcycle for Newton will again
> be virtual. Now, we need to choose a date. Please indicate which options
> work for you (more than one may be selected):
> 
> http://doodle.com/poll/gpug7ynd9fn4rdfe
> 
> I'll close this poll two Mondays from now, May 30.
> 
> Note that this will be similar to the last midcycle; likely split up
into two sessions. Last time was 1500-2000 UTC and 0000-0400 UTC. If
> that worked for folks, we'll do the same times again.

June 20-22 won, with the votes being 18 to 14.

The actual dates UTC will be something like:

June 20 1500-2000
June 21 0000-0400
June 21 1500-2000
June 22 0000-0400
June 22 1500-2000
June 23 0000-0400

I'll send out communication channels and such before the end of next
week.

See you all there!

// jim

> 
> Thanks!
> 
> // jim
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-17, June 6-10

2016-06-01 Thread Doug Hellmann
Focus
-----

Leading into the second milestone, new feature development should
be under way and specs should be approved or close to approved.

First milestone tags are due this week.

General Notes
-------------

We've had a few library release requests filed late in the week.
Please keep in mind that we prefer releases of libs and other things
with dependencies early in the week, and may postpone processing a
release request on a Friday until the next week.

Release Actions
---------------

File milestone release requests for projects using the
cycle-with-milestone model by tomorrow, June 2.  Milestones are
considered betas, and should have version numbers like X.0.0.0b1
where X is the next major version number for your deliverables.

Projects following the cycle-with-intermediary, cycle-trailing, or
independent release models can also request releases, of course.

This is also a good time to review stable/liberty and stable/mitaka
branches for needed releases.

Important Dates
---------------

Newton 1 milestone, June 2 (tomorrow).
Newton 2 milestone, July 14.

Newton release schedule: http://releases.openstack.org/newton/schedule.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] [nova] live migration, libvirt 1.3, and the gate

2016-06-01 Thread Sean Dague
On 06/01/2016 01:33 PM, Matt Riedemann wrote:

> 
> Sounds like there was a bad check in nova which is fixed here:
> 
> https://review.openstack.org/#/c/323467/
> 
> And a d-g change depends on that here:
> 
> https://review.openstack.org/#/c/320925/
> 
> Is there anything more to do for this? I'm assuming we should backport
> the nova change to the stable branches because the d-g change is going
> to break those multinode jobs on stable, although they are already
> non-voting jobs so it doesn't really matter. But if we knowingly break
> those jobs on stable branches, we should fix them to work or exclude
> them from running on stable branch changes since it'd be a waste of test
> resources.

The intent is to backport them. We probably can land the d-g change
without waiting for the backports, but they are super straightforward,
so should be easy to get in quickly.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][diskimage-builder] Proposing Stephane Miller to dib-core

2016-06-01 Thread Gregory Haynes
Hello everyone,

I'd like to propose adding Stephane Miller (cinerama) to the
diskimage-builder core team. She has been a huge help with our reviews
for some time now and I think she would make a great addition to our
core team. I know I have benefited a lot from her bash expertise in many
of my reviews and I am sure others have as well :).

I've spoken with many of the active cores privately and only received
positive feedback on this, so rather than use this as an all out vote
(although feel free to add your ++'s) I'd like to use this as a final
call out in case any objections are wanting to be made. If none have
been made by next Wednesday (6/8) I'll go ahead and add her to dib-core.

Cheers,
Greg

-- 
  Gregory Haynes
  g...@greghaynes.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] [nova] live migration, libvirt 1.3, and the gate

2016-06-01 Thread Matt Riedemann

On 5/31/2016 8:36 AM, Daniel P. Berrange wrote:

On Tue, May 31, 2016 at 08:19:33AM -0400, Sean Dague wrote:

On 05/30/2016 06:25 AM, Kashyap Chamarthy wrote:

On Thu, May 26, 2016 at 10:55:47AM -0400, Sean Dague wrote:

On 05/26/2016 05:38 AM, Kashyap Chamarthy wrote:

On Wed, May 25, 2016 at 05:42:04PM +0200, Kashyap Chamarthy wrote:

[...]


So, in short, the central issue seems to be this: the custom 'gate64'
model is not being translated by libvirt into a model that QEMU can
recognize.


An update:

Upstream libvirt points out that this turns out to be a regression, and
bisected it to a commit (in libvirt Git): 1.2.9-31-g445a09b -- "qemu:
Don't compare CPU against host for TCG".

So, I expect there's going to be fix pretty soon upstream libvirt.


Which is good... I wonder how long we'll be waiting for that back in our
distro packages though.


Yeah, until the fix lands, our current options seem to be:

  (a) Revert to a known good version of libvirt


Downgrading libvirt so dramatically isn't a thing we'll be able to do.


  (b) Use nested virt (i.e. ) -- I doubt is possible
  on RAX environment, which is using Xen, last I know.


We turned off nested virt even where it was enabled, because it locks up
at a non trivial rate. So not really an option.


Hmm, if the guest is using 'qemu' and not 'kvm', then there should be
no dependancy between the host CPU and guest CPU whatsoever. ie we can
present arbitrary CPU to the guest, whether the host CPU has matching
features or not.

I wonder if there is a bug in Nova where it is trying to do a host/guest
CPU compatibility check even for 'qemu' guests, when it should only do
them for 'kvm' guests.

If we can avoid the CPU compatibility check with qemu guest, then the
fact that there's a libvirt bug here should be irrelevant, and we could
avoid needing to invent a gate64 CPU model too.
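
A sketch of the guard Daniel is suggesting, with illustrative names only
(not nova's actual code): only KVM guests execute on the host CPU, so only
they need the comparison.

    def should_compare_cpu(virt_type):
        # 'qemu' (TCG) guests emulate the CPU entirely in software, so an
        # arbitrary guest CPU model is fine regardless of the host.
        return virt_type == 'kvm'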


Regards,
Daniel



Sounds like there was a bad check in nova which is fixed here:

https://review.openstack.org/#/c/323467/

And a d-g change depends on that here:

https://review.openstack.org/#/c/320925/

Is there anything more to do for this? I'm assuming we should backport 
the nova change to the stable branches because the d-g change is going 
to break those multinode jobs on stable, although they are already 
non-voting jobs so it doesn't really matter. But if we knowingly break 
those jobs on stable branches, we should fix them to work or exclude 
them from running on stable branch changes since it'd be a waste of test 
resources.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Does Murano support version management?

2016-06-01 Thread Stan Lagun
Hi,

Murano does have partial support for version management. Murano
(deployment) engine can work with several versions of the same package. You
can even have them in a single environment. In order to store several such
packages you must use the Glare backend rather than the old API backend, and
must use a recent Murano version. But there is no support in the UI yet. So if you
plan to use CLI or API for automation then it is doable. However if you
want to choose package version in the dashboard then we are not there yet.

Also note that the package version is for the Murano application, not for
the software that this app installs. So if you want to let the user choose which
version of the software to install, this can be either one of the application's
inputs or there can be several independent Murano apps (one per version +
library package to put common code there)

See [1] and [2] for more information.

[1]:
http://docs.openstack.org/developer/murano/draft/appdev-guide/murano_pl.html#versioning
[2]:
http://murano-specs.readthedocs.io/en/latest/specs/liberty/murano-versioning.html

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis



On Tue, May 31, 2016 at 10:20 PM, Jay Lau  wrote:

> Hi,
>
> I have a question for Murano: Suppose I want to manage two different
> version Spark packages, does Murano can enable me create one Application in
> application catalog but can enable me select different version spark
> packages to install?
>
> --
> Thanks,
>
> Jay Lau (Guangya Liu)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.messaging 5.2.0 release (newton)

2016-06-01 Thread no-reply
We are thrilled to announce the release of:

oslo.messaging 5.2.0: Oslo Messaging API

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.messaging

With package available at:

https://pypi.python.org/pypi/oslo.messaging

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.messaging

For more details, please see below.

5.2.0
^

Other Notes

* Switch to reno for managing release notes.

Changes in oslo.messaging 5.1.0..5.2.0
--

4b0e247 Updated from global requirements
5fb8f26 Modify the TransportURL's docstrings
9ccfbdd Fix problems after refactoring RPC client
4d0f7ab deprecate usage of transport aliases
a7e5a42 kafka: Deprecates host, port options
f6e1e0a Updated from global requirements
0991a69 Add reno for releasenotes management
39749c7 Remove logging from serialize_remote_exception
99b8437 [kafka] Add several bootstrap servers support
43cfc18 Fix consuming from missing queues
32a7c1c Fix bug with version_cap and target.version in RPCClient
6726025 Make TransportURL.parse aware of transport_url
2f0d53b rabbit: Deprecates host, port, auth options
5dd059a Remove deprecated localcontext
88e26a5 zeromq: Deprecates host, port options
e72a8d5 Reorganize the AMQP 1.0 driver source files
63de855 Implements configurable connection factory
00bb55e The need to wait for a given time is no longer valid in 3.2+
4efd2d9 [zmq] Reduce object serialization on router proxy
6037b2b Updated from global requirements
9cdc9e0 [zmq] Add backend ROUTER to increase bandwidth
681c9fe [zmq] Add Sentinel instructions to deployment guide
776871f Rabbit driver: failure of rpc-calls with float timeout
53aa3a5 Use eventletutils to check is_monkey_patched
e65539b [zmq] Second router proxy doesn't dispatch messages properly
2aab5a6 Add parse.unquote to transport_url
042fef5 Use single producer and to avoid an exchange redeclaration
82602ae Refactor RPC client

Diffstat (except docs and test files)
-

.gitignore |1 +
oslo_messaging/__init__.py |1 -
oslo_messaging/_cmd/zmq_proxy.py   |   14 +-
oslo_messaging/_drivers/amqp1_driver/__init__.py   |0
oslo_messaging/_drivers/amqp1_driver/controller.py |  747 ++
.../_drivers/amqp1_driver/drivertasks.py   |  111 ++
oslo_messaging/_drivers/amqp1_driver/eventloop.py  |  345 +++
oslo_messaging/_drivers/amqp1_driver/opts.py   |  100 ++
oslo_messaging/_drivers/amqpdriver.py  |   10 +-
oslo_messaging/_drivers/base.py|2 +-
oslo_messaging/_drivers/common.py  |7 +-
oslo_messaging/_drivers/impl_amqp1.py  |  299 ++
oslo_messaging/_drivers/impl_fake.py   |2 +-
oslo_messaging/_drivers/impl_kafka.py  |   31 +-
oslo_messaging/_drivers/impl_pika.py   |   26 +-
oslo_messaging/_drivers/impl_rabbit.py |   91 +-
.../_drivers/pika_driver/pika_commons.py   |   14 -
.../_drivers/pika_driver/pika_connection.py|   55 +-
.../pika_driver/pika_connection_factory.py |  307 ++
oslo_messaging/_drivers/pika_driver/pika_engine.py |  275 ++---
.../_drivers/pika_driver/pika_message.py   |4 +-
oslo_messaging/_drivers/pool.py|   15 +-
oslo_messaging/_drivers/protocols/__init__.py  |0
oslo_messaging/_drivers/protocols/amqp/__init__.py |0
.../_drivers/protocols/amqp/controller.py  |  747 --
oslo_messaging/_drivers/protocols/amqp/driver.py   |  299 --
.../_drivers/protocols/amqp/drivertasks.py |  112 --
.../_drivers/protocols/amqp/eventloop.py   |  345 ---
oslo_messaging/_drivers/protocols/amqp/opts.py |  100 --
.../_drivers/zmq_driver/broker/zmq_queue_proxy.py  |   87 +-
.../dealer/zmq_dealer_publisher_proxy.py   |   44 +-
.../client/publishers/dealer/zmq_reply_waiter.py   |   19 +-
.../client/publishers/zmq_pub_publisher.py |   26 +-
.../client/publishers/zmq_publisher_base.py|   11 +
.../_drivers/zmq_driver/client/zmq_response.py |   11 +-
.../zmq_driver/matchmaker/matchmaker_redis.py  |8 +
.../server/consumers/zmq_dealer_consumer.py|   77 +-
.../server/consumers/zmq_pull_consumer.py  |2 +-
.../server/consumers/zmq_router_consumer.py|2 +-
.../server/consumers/zmq_sub_consumer.py   |   50 +-
.../zmq_driver/server/zmq_incoming_message.py  |8 +-
oslo_messaging/_drivers/zmq_driver/zmq_names.py|   26 +-
oslo_messaging/_drivers/zmq_driver/zmq_socket.py   |   14 +-
oslo_messaging/_utils.py   |6 +
oslo_messaging/conffixture.py  |2 +-
oslo_messaging/localcontext.py |   85 --
oslo_messaging/notify/dispatcher.py|5 -

Re: [openstack-dev] [puppet] Discussion of PuppetOpenstack Project abbreviation

2016-06-01 Thread Cody Herriges

> On Jun 1, 2016, at 5:56 AM, Dmitry Tantsur  wrote:
> 
> On 06/01/2016 02:20 PM, Jason Guiditta wrote:
>> On 01/06/16 18:49 +0800, Xingchao Yu wrote:
>>>  Hi, everyone:
>>> 
>>>  Do we need to give the PuppetOpenstack project an abbreviation? B/C
>>>  it's really a long name when I introduce this project to people or
>>>  write articles about it.
>>> 
>>>  How about POM (PuppetOpenstack Modules) or POP (PuppetOpenstack
>>>  Project)?
>>> 
>>>  I would like +1 for POM.
>>>  Just an idea, please feel free to give your comment :D
>>>  Xingchao Yu
>> 
>> For rdo and osp, we package it as 'openstack-puppet-modules', or OPM
>> for short.
> 
> I definitely love POM as it reminds me of pomeranians :) but I agree that OPM 
> will probably be more easily recognizable.

The project's official name is in fact "Puppet OpenStack" so OPM would be kinda 
confusing.  I'd put my vote on POP because it is closer to the actual 
definition of an acronym[1], which I generally find easier to remember overall 
when it comes to the shortening of long phrases.

[1] http://www.merriam-webster.com/dictionary/acronym

--
Cody



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [diskimage-builder] Howto refactor?

2016-06-01 Thread Andre Florath
Hi!

Created a first draft of a 'Big Picture' document including an overview
and a technical breakdown [1].
I'm not sure if this is what you expect or if something is still
missing.

Maybe we can start with etherpad - and do the first iterations
there. Later on I'd like to see it moved to the dib repo (IMHO
requirements, design documents and source code should live together :-) ).

I'll have a detailed look at your suggestions about splitting things apart
and will come up with a proposal.

Kind regards

Andre

[1] https://etherpad.openstack.org/p/C80jjsAs4x


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] Today's Trove meeting is canceled

2016-06-01 Thread Amrith Kumar
Nothing new has been added to the agenda. Today's meeting is canceled.

If anything urgent comes up, use #openstack-trove.

Thanks,

-amrith

> -Original Message-
> From: Amrith Kumar
> Sent: Wednesday, June 01, 2016 9:17 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [trove] Reminder: Trove meeting this afternoon
> 
> A reminder to everyone that there's a Trove meeting scheduled for 2pm
> Eastern Time today.
> 
> Currently there is nothing on the agenda. If there's nothing on the agenda
> by ~ 60m before the meeting, I propose that we cancel the meeting.
> 
> Agenda is at https://wiki.openstack.org/wiki/Meetings/TroveMeeting
> 
> Please update if you have something you'd like to discuss.
> 
> Thanks,
> 
> -amrith

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia] enabling new topologies

2016-06-01 Thread Stephen Balukoff
Hey Sergey--

Apologies for the delay in my response. I'm still wrapping my head around
your option 2 suggestion and the implications it might have for the code
base moving forward. I think, though, that I'm against your option 2
proposal and in favor of option 1 (which, yes, is more work initially) for
the following reasons:

A. We have a precedent in the code tree with how the stand-alone and
active-standby topologies are currently being handled. Yes, this does
entail various conditionals and branches in tasks and flows-- which is not
really that ideal, as it means the controller worker needs to have more
specific information on how topologies work than I think any of us would
like, and this adds some rigidity to the implementation (meaning 3rd party
vendors may have more trouble interfacing at that level)...  but it's
actually "not that bad" in many ways, especially given we don't anticipate
supporting a large or variable number of topologies. (stand-alone,
active-standby, active-active... and then what? We've been doing this for a
number of years and nobody has mentioned any radically new topologies they
would like in their load balancing. Things like auto-scale are just a
specific case of active-active).

B. If anything, Option 2 builds in more, and less obvious, rigidity into the
implementation than option 1. For example, it makes the assumption that the
distributor is necessarily an amphora or service VM, whereas we have
already heard that some will implement the distributor as a pure network
routing function that isn't going to be managed the same way other amphorae
are.

C. Option 2 seems like it's going to have a lot more permutations that
would need testing to ensure that code changes don't break existing /
potentially supported functionality. Option 1 keeps the distributor and
amphorae management code separate, which means tests should be more
straight-forward, and any breaking changes which slip through potentially
break less stuff. Make sense?

Stephen


On Sun, May 29, 2016 at 7:12 AM, Sergey Guenender  wrote:

> I'm working with the IBM team implementing the Active-Active N+1 topology
> [1].
>
> I've been commissioned with the task to help integrate the code supporting
> the new topology while a) making as few code changes as possible and
> b) reusing as much code as possible.
>
> To make sure the changes to existing code are future-proof, I'd like to
> implement them outside AA N+1, submit them on their own and let the AA N+1
> base itself on top of it.
>
> --TL;DR--
>
> what follows is a description of the challenges I'm facing and the way I
> propose to solve them. Please skip down to the end of the email to see the
> actual questions.
>
> --The details--
>
> I've been studying the code for a few weeks now to see where the best
> places for minimal changes might be.
>
> Currently I see two options:
>
>1. introduce a new kind of entity (the distributor) and make sure it's
> being handled on any of the 6 levels of controller worker code (endpoint,
> controller worker, *_flows, *_tasks, *_driver)
>
>2. leave most of the code layers intact by building on the fact that
> distributor will inherit most of the controller worker logic of amphora
>
>
> In Active-Active topology, very much like in Active/StandBy:
> * top level of distributors will have to run VRRP
> * the distributors will have a Neutron port made on the VIP network
> * the distributors' neutron ports on VIP network will need the same
> security groups
> * the amphorae facing the pool member networks still require
>   * ports on the pool member networks
>   * "peers" HAProxy configuration for real-time state exchange
>   * VIP network connections with the right security groups
>
> The fact that existing topologies lack the notion of distributor and
> inspecting the 30-or-so existing references to amphorae clusters, swayed me
> towards the second option.
>
> The easiest way to make use of existing code seems to be by splitting
> load-balancer's amphorae into three overlapping sets:
> 1. The front-facing - those connected to the VIP network
> 2. The back-facing - subset of front-facing amphorae, also connected to
> the pool members' networks
> 3. The VRRP-running - subset of front-facing amphorae, making sure the VIP
> routing remains highly available
>
> At the code-changes level
> * the three sets can be simply added as properties of
>   common.data_model.LoadBalancer
> * the existing amphorae cluster references would switch to using one of
>   these properties, for example
>   * the VRRP sub-flow would loop over only the VRRP amphorae
>   * the network driver, when plugging the VIP, would loop over the
>     front-facing amphorae
>   * when connecting to the pool members' networks,
>     network_tasks.CalculateDelta would only loop over the back-facing amphorae
>
>-
>
> In terms of backwards compatibility, Active-StandBy topology would have
> the 3 sets equal and contain both of its amphorae.
>
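
A minimal sketch of that three-set split, assuming illustrative property
and flag names rather than Octavia's actual data model:

    class LoadBalancer(object):
        def __init__(self, amphorae):
            self.amphorae = amphorae

        @property
        def front_facing_amphorae(self):
            # All amphorae plugged into the VIP network.
            return [a for a in self.amphorae if a.has_vip_port]

        @property
        def back_facing_amphorae(self):
            # Subset also plugged into the pool member networks.
            return [a for a in self.front_facing_amphorae
                    if a.has_member_ports]

        @property
        def vrrp_amphorae(self):
            # Subset keeping VIP routing highly available via VRRP.
            return [a for a in self.front_facing_amphorae if a.runs_vrrp]

In Active-StandBy, as noted above, all three properties would return both
amphorae.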
> An even more 

Re: [openstack-dev] [Neutron][Release] Changing release model for *-aas services

2016-06-01 Thread Mark Voelker

> On Jun 1, 2016, at 12:27 PM, Armando M.  wrote:
> 
> 
> 
> On 1 June 2016 at 02:28, Thierry Carrez  wrote:
> Armando M. wrote:
> Having looked at the recent commit volume that has been going into the
> *-aas repos, I am considering changing the release model for
> neutron-vpnaas, neutron-fwaas, neutron-lbaas
> from release:cycle-with-milestones [1] to
> release:cycle-with-intermediary [2]. This change will allow us to avoid
> publishing a release at fixed times when there's nothing worth releasing.
> 
> I commented on the review, but I think it's easier to discuss this here...
> 
> Beyond changing the release model, what you're proposing here is to remove 
> functionality from an existing deliverable ("neutron" which was a combination 
> of openstack/neutron and openstack/neutron-*aas, released together) and 
> making the *aas things separate deliverables.
> 
> All I wanted to do is change the release model of the *-aas projects, without 
> side effects. I appreciate that the governance structure doesn't seem to 
> allow this easily, and I am looking for guidance.
>   
> 
> From a Defcore perspective, the trademark programs include the "neutron" 
> deliverable. So the net effect for DefCore is that you remove functionality 
> -- and removing functionality from a Defcore-used project needs extra care 
> and heads-up time.
> 
> To the best of my knowledge none of the *-aas projects are part of defcore, 
> and since [1] has no mention of vpn, fw, or lb, nor any planned, I thought I was on 
> the safe side.
> 

Thanks for checking.  You are correct: LBaaS, VPNaaS, and FWaaS capabilities 
are not present in existing Board-approved DefCore Guidelines, nor have they 
been proposed for the next one. [2]

[2] http://git.openstack.org/cgit/openstack/defcore/tree/next.json

At Your Service,

Mark T. Voelker
DefCore Committee Co-Chair

> 
> It's probably fine to remove *-aas from the neutron deliverable if there is 
> no Defcore capability or designated section there (current or planned). 
> Otherwise we need to have a longer conversation that is likely to extend 
> beyond the release model deadline tomorrow.
> 
> I could not see one in [1]
>  
> [1] https://github.com/openstack/defcore/blob/master/2016.01.json 
> 
> 
> 
> -- 
> Thierry Carrez (ttx)
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Vitrage is now an official OpenStack project

2016-06-01 Thread Afek, Ifat (Nokia - IL)
Hi,

Great news - Vitrage (OpenStack RCA service) was accepted into the big tent 
yesterday, in the TC IRC meeting[1]. 

Thanks to everyone who participated in this huge effort of creating a whole 
project from scratch in a little more than 6 months. 

Now, with the wind in our sails, we invite you to join us on our exciting 
journey. New contributors are more than welcome to suggest blueprints, 
participate in the discussions and write code. All relevant information and 
contact info are detailed in the Vitrage wiki page [2].

Thanks,
Ifat.

[1] https://review.openstack.org/#/c/320296 
[2] https://wiki.openstack.org/wiki/Vitrage 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia] is l7 rule using routed L3 connectivity

2016-06-01 Thread Stephen Balukoff
Hello Yong Sheng Gong!

Apologies for the lateness of my reply (I've been intermittently available
over the last month and am now just catching up on ML threads). Anyway, did
you get your question answered elsewhere? It looks like you've discovered a
bug in the behavior here-- when you created the member on server_subnet2,
an interface should have been added to the amphora in the amphora-haproxy
namespace. If you haven't yet filed a bug report on this, could you?

In any case, the behavior you're seeing (which is not correct in this case)
is that if the amphora doesn't have a directly-connected interface, it will
use its default route to attempt to reach the member.
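
One quick way to confirm that fallback-to-default-route behavior is to dump
the routing table inside the namespace; a minimal sketch in Python, assuming
the pyroute2 library and the standard amphora-haproxy namespace name:

    from socket import AF_INET
    from pyroute2 import NetNS

    # Inspect the IPv4 routing table inside the amphora-haproxy namespace.
    ns = NetNS('amphora-haproxy')
    try:
        for route in ns.get_routes(family=AF_INET):
            attrs = dict(route['attrs'])
            # A route carrying no RTA_DST attribute is the default route.
            print('%s -> %s' % (attrs.get('RTA_DST', 'default'),
                                attrs.get('RTA_GATEWAY')))
    finally:
        ns.close()

If no route covers 10.20.2.0/24, traffic to the member leaves via the
default gateway rather than a directly-connected interface.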

Stephen

On Tue, May 10, 2016 at 12:01 AM, 龚永生  wrote:

> Hi, Stephen,
>
> By running following commands:
> neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --loadbalancer
> 85800051-31fb-4ca0-962d-8835656a61ef --protocol HTTP --name pool2
>
> neutron net-create server_net2
>
> neutron subnet-create server_net2 10.20.2.0/24 --name server_subnet2
>
> neutron lbaas-member-create --subnet server_subnet2 --address 10.20.2.10
> --protocol-port 8080 pool2
>
> neutron lbaas-l7policy-create --name policy1  --action REDIRECT_TO_POOL
> --redirect-pool pool2 --listener 8ec3a2e5-8cb5-472e-a12c-f067eefa4b7a
>
> neutron lbaas-l7rule-create --type PATH --compare-type STARTS_WITH --value
> "/api" policy1
>
> I found there is no interface on server_subnet2 in the amphora-haproxy namespace
> on amphora:
> ubuntu@amphora-86359e7c-f473-41c3-9531-c8bf129ec6b7:~$ sudo ip netns exec
> amphora-haproxy ip a
> 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state
> UP group default qlen 1000
> link/ether fa:16:3e:5f:4c:c1 brd ff:ff:ff:ff:ff:ff
> inet 10.20.0.33/24 brd 10.20.0.255 scope global eth1
>valid_lft forever preferred_lft forever
> inet 10.20.0.32/24 brd 10.20.0.255 scope global secondary eth1:0
>valid_lft forever preferred_lft forever
> inet6 fe80::f816:3eff:fe5f:4cc1/64 scope link
>valid_lft forever preferred_lft forever
> 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state
> UP group default qlen 1000
> link/ether fa:16:3e:8a:3d:f5 brd ff:ff:ff:ff:ff:ff
> inet 10.20.1.5/24 brd 10.20.1.255 scope global eth2
>valid_lft forever preferred_lft forever
> inet6 fe80::f816:3eff:fe8a:3df5/64 scope link
>valid_lft forever preferred_lft forever
>
> Is the L7 policy using "Routed (layer-3) connectivity"?
>
> Thanks,
> yong sheng gong
> --
> 龚永生
> 九州云信息科技有限公司 99CLOUD Co. Ltd.
> 邮箱(Email):gong.yongsh...@99cloud.net 
> 地址:北京市海淀区上地三街嘉华大厦B座806
> Addr : Room 806, Tower B, Jiahua Building, No. 9 Shangdi 3rd Street,
> Haidian District, Beijing, China
> 手机(Mobile):+86-18618199879
> 公司网址(WebSite):http://99cloud.net 
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [diskimage-builder] Howto refactor?

2016-06-01 Thread Gregory Haynes
On Wed, Jun 1, 2016, at 01:06 AM, Ian Wienand wrote:
> On 06/01/2016 02:10 PM, Andre Florath wrote:
> > My long term goal is, to add some functionality to the DIB's block
> > device layer, like to be able to use multiple partitions, logical
> > volumes and mount points.
> 
> Some thoughts...
> 
> There's great specific info in the readme's of the changes you posted
> ... but I'm missing a single big picture context of what you want to
> build on-top of all this and how the bits fit into it.  We don't have
> a formalised spec or blueprint process, but something that someone who
> knows *nothing* about this can read and follow through will help; I'd
> suggest an etherpad, but anything really.  At this point, you are
> probably the world-expert on dib's block device layer, you just need
> to bring the rest of us along :)

++, but one clarification: We do have a spec process which is to use the
tripleo-specs repo. Since this is obviously not super clear and there is
a SnR issue for folks who are only dib core maybe we should move specs
in to the dib repo?

I also agree that some type of overview is extremely useful. I hesitate
to recommend writing specs because of how much extra work it tends to be
for all of us, but I honestly think that a couple of these features
could individually use specs - more detail in review comments on
https://review.openstack.org/#/c/319591/5.

> 
> There seems to be a few bits that are useful outside the refactor.
> Formalising python elements, extra cleanup phases, dib-run-parts
> fixes, etc.  Splitting these out, we can get them in quicker and it
> reduces the cognitive load for the rest.  I'd start there.

Splitting these out would help a lot. This whole set of features is
going to take a while to iterate on (sorry! - reviewer capacity is
limited and there are big changes here) and a few of these are pretty
straightforward things I think we really want (such as the cleanup
phase). There's also a lot of risk to us in merging large changes since
we are the poster child for how having external dependencies makes
testing hard. Making smaller changes lets us release / debug /
potentially revert them individually which is a huge win.

> 
> #openstack-infra is probably fine to discuss this too, as other dib
>   knowledgeable people hang out there.
> 
> -i

As for what to do about the existing and potentially conflicting changes
-  that's harder to answer. I think there's a very valid concern from
the original authors about scope creep of their original goal. We also,
obviously, don't want to land something that will make it more difficult
for us to enhance later on.

I think with the LVM patch there actually isn't much of risk to making
your work more difficult - the proposed change is pretty small and has a
small input surface area - it should be easy to preserve its behavior
while also supporting a more general solution. For the EFI change there
are some issues you've hit on that need to be fixed, but I am not sure
they are going to require basing the change off  a more general fix. It
might be as easy as copying the element contents in to a new dir when a
more general solution is completed in which case getting the changes
completed in smaller portions is more beneficial IMO.

I also wanted to say thanks a ton for the work and reviews - it is
extremely useful stuff and we desperately need the help. :)

Cheers,
Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins] Docker-compose support

2016-06-01 Thread Fox, Kevin M
I've been holding off, but I'll chime in now.

I believe Higgins should be about abstracting away differences between the 
container systems where the differences are needless and the user really 
couldn't care less about them. I.e.,
"here, launch this container" can be done:
 * k8s pod create ...
 * docker run ...
 * docker-compose ...
 * nova boot --user-data "#!/bin/bash\ndocker run ..." ...
 * #insert mesos cli here... I don't know it off hand...

But if you start building up COE-like functionality on a base where there is 
no lowest-common-denominator (LCD) functionality, you end up having to 
reimplement a COE, as the underlying COE can't do the things you want and you 
use it as just a container launcher.
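
To make the LCD idea concrete, here is a minimal sketch (illustrative only,
not actual Higgins code) of a single "launch" verb mapped onto different
backend CLIs:

    import subprocess

    # Map each backend to the CLI invocation for the one LCD verb we
    # expose: "launch this container image".
    BACKENDS = {
        'docker': lambda image: ['docker', 'run', '-d', image],
        'k8s': lambda image: ['kubectl', 'run',
                              image.replace('/', '-').replace(':', '-'),
                              '--image=' + image],
    }

    def launch(backend, image):
        # Anything beyond this verb (pods, scheduling policy, ...) is
        # deliberately not part of the abstraction.
        return subprocess.check_output(BACKENDS[backend](image))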

Now, since we're starting to talk about coming up with a new COE, I've got to 
insert the obligatory standards xkcd here:
https://xkcd.com/927/

I recommend sticking to the stable, LCD-like functionality for now. With the 
COE situation in flux, I think it's likely one of 4 things will happen.

1. Advanced features, like pods, will become so critical to developers that the 
COEs that don't support them will gain them, and then the LCD is reasonable 
again for advanced functionality. (likely)
2. The COEs that don't support features like pods will die out as they aren't 
useful, and then targeting the LCD is again reasonable. (also likely)
3. One COE rises to dominance pushing out the others. (possible)
4. People will forget about the need of features like pods as some other 
abstraction that ends up being more useful gets adopted. (unlikely)

Getting the basic get-containers-launched functionality working is something 
that can be done today, and it will immediately be useful to users. 
Implementing advanced COE features will be a lot more work if you don't target 
the LCD, and it may be needless for the reasons listed above. Let's hold off 
and see where things settle.

In my personal experience converting apps to containers, I have not been able 
to live without the k8s notion of pods for a number of apps. I use either 
heat-templates + docker-compose or k8s to launch containers in sets that use 
unix sockets for communication. docker-swarm has been a non-starter due to the 
lack of pods, and I haven't looked at mesos too much yet, as k8s has been 
working well for me.

Thanks,
Kevin


From: Denis Makogon [lildee1...@gmail.com]
Sent: Tuesday, May 31, 2016 11:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [higgins] Docker-compose support

Hello Hongbin.

I would disagree with what you are saying, because having Higgins do only very 
basic stuff is not very valuable. For those of us who work with development and 
continuous delivery, how can Higgins address, for example, micro-service 
chaining?

In any case, Higgins will eventually end up having its own DSL (or TOSCA, or a 
compose DSL), because there is not much benefit in having an API that only 
spins up containers separately. Developers will, again, have to build solutions 
on top of Higgins to support more advanced things like service chaining, and 
that would mean that Higgins doesn't meet their requirements for further 
service consumption.


Kind regards,
Denys Makogon


2016-05-31 23:15 GMT+03:00 Hongbin Lu 
>:
I don’t think it is a good idea to re-invent docker-compose in Higgins. 
Instead, we should leverage existing libraries/tools if we can.

Frankly, I don’t think Higgins should interpret any docker-compose-like DSL 
server-side, but maybe it is a good idea to have a CLI extension that 
interprets a specific DSL and translates it into a set of REST API calls to the 
Higgins server. The solution should be generic enough that we can re-use it to 
interpret another DSL (e.g. pod, TOSCA, etc.) in the future.
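
As a rough illustration of that CLI-extension idea, a sketch follows (the
endpoint URL and payload fields are assumptions, since Higgins' API is still
being defined):

    import requests
    import yaml

    HIGGINS_API = 'http://127.0.0.1:9517/v1/containers'  # hypothetical URL

    def interpret_compose(path):
        # Translate a (simplified) docker-compose file into one Higgins
        # REST call per service, entirely client-side.
        with open(path) as f:
            spec = yaml.safe_load(f)
        for name, svc in spec.get('services', {}).items():
            payload = {'name': name,
                       'image': svc['image'],
                       'command': svc.get('command'),
                       'environment': svc.get('environment', {})}
            requests.post(HIGGINS_API, json=payload).raise_for_status()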

Best regards,
Hongbin

From: Denis Makogon [mailto:lildee1...@gmail.com]
Sent: May-31-16 3:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [higgins] Docker-compose support

Hello.

It is hard to tell if the given API will be the final version, but I tried to 
make it similar to the CLI and its capabilities. So, why not?

2016-05-31 22:02 GMT+03:00 Joshua Harlow 
>:
Cool, good to know.

I see 
https://github.com/docker/compose/pull/3535/files#diff-1d1516ea1e61cd8b44d000c578bbd0beR66

Would that be the primary API? Hard to tell what the API there actually is, 
haha. Is it the run() method?

I was thinking more along the lines that higgins could be an 'interpreter' of 
the same docker-compose format (or a similar format); if the library that is 
being created takes a docker-compose file and turns it into an 'intermediate' 
version/format, that'd be cool. The compiled version would then be 'executable' 
(and introspectable) by, say, higgins (which could, say, traverse over that 
intermediate version and activate its own code to turn the intermediate 
version's primitives 

Re: [openstack-dev] [Neutron][Release] Changing release model for *-aas services

2016-06-01 Thread Armando M.
On 1 June 2016 at 02:28, Thierry Carrez  wrote:

> Armando M. wrote:
>
>> Having looked at the recent commit volume that has been going into the
>> *-aas repos, I am considering changing the release model for
>> neutron-vpnaas, neutron-fwaas, neutron-lbaas
>> from release:cycle-with-milestones [1] to
>> release:cycle-with-intermediary [2]. This change will allow us to avoid
>> publishing a release at fixed times when there's nothing worth releasing.
>>
>
> I commented on the review, but I think it's easier to discuss this here...
>
> Beyond changing the release model, what you're proposing here is to remove
> functionality from an existing deliverable ("neutron", which was a
> combination of openstack/neutron and openstack/neutron-*aas, released
> together) and to make the *aas things separate deliverables.
>

All I wanted to do was change the release model of the *-aas projects,
without side effects. I appreciate that the governance structure doesn't
seem to allow this easily, and I am looking for guidance.


>
> From a Defcore perspective, the trademark programs include the "neutron"
> deliverable. So the net effect for DefCore is that you remove functionality
> -- and removing functionality from a Defcore-used project needs extra care
> and heads-up time.
>

To the best of my knowledge, none of the *-aas projects are part of DefCore,
and since [1] shows no presence of vpn, fw, or lb, current or planned, I
thought I was on the safe side.


> It's probably fine to remove *-aas from the neutron deliverable if there
> is no Defcore capability or designated section there (current or planned).
> Otherwise we need to have a longer conversation that is likely to extend
> beyond the release model deadline tomorrow.


I could not see one in [1]

[1] https://github.com/openstack/defcore/blob/master/2016.01.json


>
> --
> Thierry Carrez (ttx)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][dvr] Wasting so many external network IPs in DVR mode?

2016-06-01 Thread Carl Baldwin
On Wed, Jun 1, 2016 at 9:48 AM, zhi  wrote:
> hi, all
>
> I have some questions about north/south traffic in DVR mode.
>
> As we all know, packets will be sent to the instance's default gateway (qr
> interface) when an instance wants to communicate with the external network.
> Next, these packets will be sent from the rfp interface (qrouter namespace) to
> the fpr interface (fip namespace) after NAT by iptables rules in the qrouter
> namespace. Finally, packets will be forwarded by the fg interface, which exists
> in the fip namespace.
>
> I was quite confused by the "fg" interface.
>
> The device owner of the "fg" interface is "network:floatingip_agent_gateway"
> in Neutron. It is a special port which is allocated from the external network.
> I think, in this way, we have to waste many IP addresses from the external
> network, because we need a dedicated router IP per compute node, don't we?

Yes, this is correct.  We have a simple spec [1] in review to solve
this problem in Newton.  It will still require the same fg ports but
will allow you to pull the IP addresses for these ports from a private
address space so that your public IPs are not wasted on them.
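
On an existing cloud you can see that cost directly; a small sketch, assuming
python-neutronclient and admin credentials (each port listed holds one
external IP today):

    from neutronclient.v2_0 import client as neutron_client

    # Credentials/endpoint below are placeholders; substitute your own.
    neutron = neutron_client.Client(username='admin',
                                    password='secret',
                                    tenant_name='admin',
                                    auth_url='http://controller:5000/v2.0')
    ports = neutron.list_ports(
        device_owner='network:floatingip_agent_gateway')['ports']
    print('%d external IPs consumed by fg ports' % len(ports))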

> In DVR mode, why don't we use a "qg" interface in the qrouter namespace, just
> like in the "Legacy L3 agent mode"? We could also set up "qg" and "qr"
> interfaces in qrouter namespaces in DVR mode.

The main reason behind putting the routers behind the fip namespace
was the number of mac addresses that you would need.  Each port needs
a unique mac address and some calculations showed that in some large
environments, the number of mac addresses flying around could stretch
the limits of mac address tables in switches and routers and cause
degraded performance.

Another thing is that it was not trivial to create a port without a
permanent IP address to host floating ips which can come and go at any
time.  It is also nice to have a permanent IP address on each port to
allow debugging.  A number of ideas were thrown around for how to
accomplish this but none ever came to fruition.  The spec I mentioned
[1] will help with this by allowing a permanent IP for each port from
a private pool of plentiful IP addresses to avoid wasting the public
ones.

> Maybe my thinking is wrong, but I want to know what we gain from
> the "fip" namespace, and the reason why we do not use "qg" interfaces in DVR
> mode just like the Legacy L3 agent mode.
>
>
> Hope for your reply.  ;-)

Glad to help,
Carl

[1] https://review.openstack.org/#/c/300207/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-guide] First IRC meeting tomorrow

2016-06-01 Thread Edgar Magana
Folks,

This is a kind reminder that we are having our IRC meeting tomorrow at 1600 UTC 
in #openstack-meeting
Agenda:
https://etherpad.openstack.org/p/docnetworkingteam-agenda

Thanks,

Edgar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][dvr] Wasting so many external network IPs in DVR mode?

2016-06-01 Thread zhi
hi, all

I have some questions about north/south traffic in DVR mode.

As we all know, packets will be sent to the instance's default gateway (qr
interface) when an instance wants to communicate with the external network.
Next, these packets will be sent from the rfp interface (qrouter namespace) to
the fpr interface (fip namespace) after NAT by iptables rules in the qrouter
namespace. Finally, packets will be forwarded by the fg interface, which
exists in the fip namespace.

I was quite confused by the "fg" interface.

The device owner of the "fg" interface is
"network:floatingip_agent_gateway" in Neutron. It is a special port which is
allocated from the external network. I think, in this way, we have to waste
many IP addresses from the external network, because we need a dedicated
router IP per compute node, don't we?

In DVR mode, why don't we use a "qg" interface in the qrouter namespace, just
like in the "Legacy L3 agent mode"? We could also set up "qg" and "qr"
interfaces in qrouter namespaces in DVR mode.

Maybe my thinking is wrong, but I want to know what we gain from
the "fip" namespace, and the reason why we do not use "qg" interfaces in DVR
mode just like the Legacy L3 agent mode.


Hope for your reply.  ;-)



Thanks
Zhi Chang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] [install-guide] Install guide from source

2016-06-01 Thread Spyros Trigazis
This work aims to be published to magnum's developer page:
http://docs.openstack.org/developer/magnum/

Cheers,
Spyros


On 1 June 2016 at 17:30, Andreas Jaeger  wrote:

> On 06/01/2016 05:21 PM, Spyros Trigazis wrote:
>
>> Hi everyone,
>>
>> Is the idea of having an install-guide from source and possibly
>> virtualenvs still under consideration?
>>
>> I'd like to share with you what we are currently doing along with
>> the install-guide based on the cookiecutter template.
>>
>> I have created this change [1] in our project repo. Although some
>> commands are ugly, it works in the same way on Ubuntu, Fedora,
>> Suse and Debian. Since this change targets the Newton release, we clone
>> from master; when we branch, we will update to clone from the stable
>> branch.
>>
>> Cheers,
>> Spyros
>>
>> [1] https://review.openstack.org/#/c/319399/
>>
>
> We will not have a full from-source guide - let's grow the existing one
> first before adding another variation ;). The idea was, AFAIR, that projects
> can install from source if there are no packages for them.
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] State machines in Nova

2016-06-01 Thread Joshua Harlow

Miles Gould wrote:

On 31/05/16 21:03, Timofei Durakov wrote:

there is blueprint[1] that was approved during Liberty and resubmitted
to Newton(with spec[2]).
The idea is to define state machines for operations as live-migration,
resize, etc. and to deal with them operation states.


+1 to introducing an explicit state machine - IME they make complex
logic much easier to reason about. However, think carefully about how
you'll make changes to that state machine later. In Ironic, this is an
ongoing problem: every time we change the state machine, we have to
decide whether to lie to older clients (and if so, what lie to tell
them), or whether to present them with the truth (and if so, how badly
they'll break). AIUI this would be a much smaller problem if we'd
considered this possibility carefully at the beginning.


Do you have any more details (perhaps a 'real-life' example that you can 
walk us through) of this and how it played out? It'd be interesting to 
hear (I believe it has happened a few times, but I've never heard how it 
was resolved or the details of it).




Miles

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] [docs] [install-guide] Install guide from source

2016-06-01 Thread Matt Kassawara
You can't mix distribution packages and source on the same host without
using at least virtual environments for the latter.
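
For example, a minimal sketch (paths and the git URL are illustrative) of
isolating a from-source service in its own virtualenv:

    import subprocess
    import sys

    # Any path outside the system site-packages works here.
    VENV = '/opt/stack/venvs/magnum'

    # Create the virtualenv (assumes the 'virtualenv' package is installed),
    # then install the service from source into it, leaving distribution
    # packages on the host untouched.
    subprocess.check_call([sys.executable, '-m', 'virtualenv', VENV])
    subprocess.check_call([VENV + '/bin/pip', 'install',
                           'git+https://git.openstack.org/openstack/magnum'])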

On Wed, Jun 1, 2016 at 9:30 AM, Andreas Jaeger  wrote:

> On 06/01/2016 05:21 PM, Spyros Trigazis wrote:
>
>> Hi everyone,
>>
>> Is the idea of having an install-guide from source and possibly
>> virtualenvs still under consideration?
>>
>> I'd like to share with you what we are currently doing along with
>> the install-guide based on the cookiecutter template.
>>
>> I have created this change [1] in our project repo. Although some
>> commands are ugly, it works in the same way on Ubuntu, Fedora,
>> Suse and Debian. Since this change targets the Newton release, we clone
>> from master; when we branch, we will update to clone from the stable
>> branch.
>>
>> Cheers,
>> Spyros
>>
>> [1] https://review.openstack.org/#/c/319399/
>>
>
> We will not have a full from-source guide - let's grow the existing one
> first before adding another variation ;). The idea was, AFAIR, that projects
> can install from source if there are no packages for them.
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> ___
> OpenStack-docs mailing list
> openstack-d...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-01 Thread Hongbin Lu
Hi team,

A blueprint was created for tracking this idea: 
https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-nodes . I 
won't approve the BP until there is a team decision on accepting/rejecting the 
idea.

From the discussion at the design summit, it looks like everyone is OK with the 
idea in general (with some disagreements on the API style). However, from the 
last team meeting, it looks like some people disagree with the idea 
fundamentally, so I re-raised this on the ML to re-discuss.

If you agree or disagree with the idea of manually managing the Heat stacks 
(that contain individual bay nodes), please write down your arguments here. 
Then we can start debating it.
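
For concreteness, a minimal sketch (assuming python-heatclient; the function
and template names are illustrative, not Magnum code) of one-stack-per-node
management:

    from heatclient import client as heat_client

    def create_bay_stacks(session, cluster_tmpl, master_tmpl, minion_tmpl,
                          masters=2, minions=3):
        heat = heat_client.Client('1', session=session)
        # One stack for the global resources...
        stacks = [heat.stacks.create(stack_name='kube-cluster',
                                     template=cluster_tmpl)]
        # ...and one small stack per master/minion node, so each node can
        # be created, updated, or deleted independently.
        for i in range(masters):
            stacks.append(heat.stacks.create(
                stack_name='kube-master-%d' % i, template=master_tmpl))
        for i in range(minions):
            stacks.append(heat.stacks.create(
                stack_name='kube-minion-%d' % i, template=minion_tmpl))
        return stacks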

Best regards,
Hongbin

> -Original Message-
> From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> Sent: May-16-16 5:28 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> The discussion at the summit was very positive around this requirement
> but as this change will make a large impact to Magnum it will need a
> spec.
> 
> On the API of things, I was thinking a slightly more generic approach
> to incorporate other lifecycle operations into the same API.
> Eg:
> magnum bay-manage <bay> <command>
> 
> magnum bay-manage <bay> reset --hard
> magnum bay-manage <bay> rebuild
> magnum bay-manage <bay> node-delete <node>
> magnum bay-manage <bay> node-add --flavor <flavor>
> magnum bay-manage <bay> node-reset <node>
> magnum bay-manage <bay> node-list
> 
> Tom
> 
> From: Yuanying OTSUKA 
> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" 
> Date: Monday, 16 May 2016 at 01:07
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> Hi,
> 
> I think, user also want to specify the deleting node.
> So we should manage “node” individually.
> 
> For example:
> $ magnum node-create —bay …
> $ magnum node-list —bay
> $ magnum node-delete $NODE_UUID
> 
> Anyway, if magnum want to manage a lifecycle of container
> infrastructure.
> This feature is necessary.
> 
> Thanks
> -yuanying
> 
> 
> On Monday, 16 May 2016 at 7:50, Hongbin Lu
> >:
> Hi all,
> 
> This is a continued discussion from the design summit. For a recap,
> Magnum manages bay nodes by using ResourceGroup from Heat. This
> approach works, but it is infeasible to manage the heterogeneity across
> bay nodes, which is a frequently demanded feature. As an example, there
> is a request to provision bay nodes across availability zones [1].
> There is another request to provision bay nodes with different sets of
> flavors [2]. For the requested features above, ResourceGroup won’t work
> very well.
> 
> The proposal is to remove the usage of ResourceGroup and manually
> create Heat stack for each bay nodes. For example, for creating a
> cluster with 2 masters and 3 minions, Magnum is going to manage 6 Heat
> stacks (instead of 1 big Heat stack as right now):
> * A kube cluster stack that manages the global resources
> * Two kube master stacks that manage the two master nodes
> * Three kube minion stacks that manage the three minion nodes
> 
> The proposal might require an additional API endpoint to manage nodes
> or a group of nodes. For example:
> $ magnum nodegroup-create --bay XXX --flavor m1.small --count 2 \
>     --availability-zone us-east-1 ...
> $ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3 \
>     --availability-zone us-east-2 ...
> 
> Thoughts?
> 
> [1] https://blueprints.launchpad.net/magnum/+spec/magnum-availability-
> zones
> [2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-
> flavor
> 
> Best regards,
> Hongbin
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe requ...@lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca] Selectively publish logs to topics

2016-06-01 Thread Sundaram, Venkat (Venkat Sundaram (TSV))
Witek,

Thanks, I like the way you break it into two topics. I agree with you on the 
first one: make it similar to dimensions (global and per message).
On the second one, I see your point about including the topic as part of the 
resource URI. I am OK with it if others don’t see any issues there; it's just 
that it would be a more significant change (a new API endpoint).
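
For concreteness, a sketch of what a client call against the proposed
resource might look like (the endpoint, port, and payload fields are
assumptions drawn from this thread, not an existing API):

    import requests

    token = 'gAAAA...'  # a valid Keystone token; placeholder
    resp = requests.post(
        'http://monasca-log-api:5607/v3.0/logs/topics/audit_logs',
        headers={'X-Auth-Token': token,
                 'Content-Type': 'application/json'},
        json={'dimensions': {'hostname': 'node-1', 'service': 'nova'},
              'logs': [{'message': 'user admin logged in',
                        'attributes': {'severity': 'info'}}]})
    resp.raise_for_status()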

Yes, you are right about the retention part; that was meant only as an 
example. I have removed it from the blueprint now.

Thanks,
TSV





On 5/31/16, 7:08 AM, "Witek Bedyk"  wrote:

>Hi Venkat,
>
>thank you for submitting the blueprint [1]. It actually covers two 
>topics, both of them valuable functional extensions:
>
>1) submitting additional (apart from dimensions) information with the logs
>2) specifying a specific output topic
>
>ad. 1
>I think we should keep it generic to allow the operator to add any 
>information one needs. I like the idea of adding the 'attributes' 
>dictionary, but we would need it per message, not only per request (the 
>same story as we had with global and local dimensions).
>
>ad. 2
>As we want to change the target where the API writes the data, we could 
>use perhaps the path parameter for that. The request could look like:
>
>POST /v3.0/logs/topics/{kafka_topic_name}
>
>I don't think we should send 'retention' with every request; instead, the 
>Kafka topic should be configured accordingly. But I understand it was 
>just an example, right?
>
>
>Cheers
>Witek
>
>
>[1] 
>https://blueprints.launchpad.net/monasca/+spec/publish-logs-to-topic-selectively
>
>
>-- 
>FUJITSU Enabling Software Technology GmbH
>Schwanthalerstr. 75a, 80336 München
>
>Telefon: +49 89 360908-547
>Telefax: +49 89 360908-8547
>COINS: 7941-6547
>
>Sitz der Gesellschaft: München
>AG München, HRB 143325
>Geschäftsführer: Dr. Yuji Takada, Hans-Dieter Gatzka, Christian Menk
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [watcher] Newton mid-cycle meetup details

2016-06-01 Thread Susanne Balle
The Watcher Newton mid-cycle developer meetup will take place in Hillsboro,
OR on July 19-21 2016


For more details see the wiki [1]


[1] https://wiki.openstack.org/wiki/Watcher_newton_mid-cycle_meetup_agenda


Regards Susanne
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] [install-guide] Install guide from source

2016-06-01 Thread Andreas Jaeger

On 06/01/2016 05:21 PM, Spyros Trigazis wrote:

Hi everyone,

Is the idea of having an install-guide from source and possibly
virtualenvs still under consideration?

I'd like to share with you what we are currently doing along with
the install-guide based on the cookiecutter template.

I have created this change [1] in our project repo. Although some
commands are ugly, it works in the same way on Ubuntu, Fedora,
Suse and Debian. Since this change targets the Newton release, we clone
from master; when we branch, we will update to clone from the stable
branch.

Cheers,
Spyros

[1] https://review.openstack.org/#/c/319399/


We will not have a full from-source guide - let's grow the existing one 
first before adding another variation ;). The idea was, AFAIR, that 
projects can install from source if there are no packages for them.


Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-01 Thread John McDowall
Na/Srilatha,

Great, I am working from three repos:

https://github.com/doonhammer/networking-sfc
https://github.com/doonhammer/networking-ovn
https://github.com/doonhammer/ovs

I had an original prototype working that used an API I created. Since then, 
based on feedback from everyone, I have been moving the API to the 
networking-sfc model and then supporting that API in networking-ovn and 
ovs/ovn. I have created a new driver in networking-sfc for ovn.

I am in the process of moving networking-ovn and ovs to support the sfc model. 
Basically, I intend to pass a deep copy of the port-chain (sample attached, 
sfc_dict.py) from the ovn driver in networking-sfc to networking-ovn. This, as 
Ryan pointed out, will minimize the dependencies between networking-sfc and 
networking-ovn. I have created an additional schema for ovs/ovn (attached) 
that will provide the linkage between networking-ovn and ovs/ovn. I have the 
schema in ovs/ovn and I am in the process of updating my code to support it.
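
A minimal sketch of that hand-off (class and method names are illustrative,
not the actual driver code):

    import copy

    class OVNSfcDriver(object):
        # A networking-sfc driver that delegates to networking-ovn.

        def __init__(self, ovn_client):
            self._ovn = ovn_client

        def create_port_chain(self, context, port_chain):
            # Hand networking-ovn its own deep copy of the port-chain dict
            # so neither project mutates the other's state.
            self._ovn.create_port_chain(copy.deepcopy(port_chain))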

Not sure where you guys want to jump in – but I can help in any way you need.

Regards

John

From: Na Zhu >
Date: Tuesday, May 31, 2016 at 9:02 PM
To: John McDowall 
>
Cc: Ryan Moats >, OpenStack 
Development Mailing List 
>, 
"disc...@openvswitch.org" 
>, Srilatha Tangirala 
>
Subject: Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

+ Add Srilatha.



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From:Na Zhu/China/IBM
To:John McDowall 
>
Cc:Ryan Moats >, OpenStack 
Development Mailing List 
>, 
"disc...@openvswitch.org" 
>
Date:2016/06/01 12:01
Subject: Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC and OVN



John,

Thanks.

Srilatha (srila...@us.ibm.com) and I want to work together with you. I know 
you have already done some development work.
Can you tell me what you have done and put the latest code in your private repo?
Can we work out a plan for the remaining work?




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)




From:John McDowall 
>
To:Ryan Moats >
Cc:OpenStack Development Mailing List 
>, 
"disc...@openvswitch.org" 
>
Date:2016/06/01 08:58
Subject: Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC and OVN
Sent by:"discuss" 
>




Ryan,

More help is always great :-). As far as how to collaborate, whatever is 
easiest for everyone – I am pretty flexible.

Regards

John

From: Ryan Moats >
Date: Tuesday, May 31, 2016 at 1:59 PM
To: John McDowall 
>
Cc: Ben Pfaff >, 
"disc...@openvswitch.org" 
>, Justin Pettit 
>, OpenStack Development Mailing List 
>, 
Russell Bryant >
Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN

John McDowall 
> wrote 
on 05/31/2016 03:21:30 PM:

> From: John McDowall 
> >
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: Ben Pfaff >, 
> "disc...@openvswitch.org"
> >, 

[openstack-dev] [docs] [install-guide] Install guide from source

2016-06-01 Thread Spyros Trigazis
Hi everyone,

Is the idea of having an install-guide from source and possibly
virtualenvs still under consideration?

I'd like to share with you what we are currently doing along with
the install-guide based on the cookiecutter template.

I have created this change [1] in our project repo. Although some
commands are ugly, it works in the same way on Ubuntu, Fedora,
Suse and Debian. Since this change targets the Newton release, we clone
from master; when we branch, we will update to clone from the stable
branch.

Cheers,
Spyros

[1] https://review.openstack.org/#/c/319399/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Newton midcycle meetup details posted in the wiki

2016-06-01 Thread Matt Riedemann
I've updated the Newton midcycle meetup details in the wiki [1]. There 
is now info on where to go, including directions from PDX airport and a 
map of the Intel campus where we'll be meeting.


There is also hotel information on the wiki.

If you're coming from outside the US and have questions about visas or 
other documents, please get a hold of me so I can pass that on to Intel.


[1] https://wiki.openstack.org/wiki/Sprints/NovaNewtonSprint

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][lbaas] Operator-facing installation guide

2016-06-01 Thread Spyros Trigazis
Hi.

I have added https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun

Regards,
Spyros


On 1 June 2016 at 16:39, Hongbin Lu  wrote:

> Hi lbaas team,
>
>
>
> I wonder if there is an operator-facing installation guide for
> neutron-lbaas. I ask because Magnum is working on an installation
> guide [1] and neutron-lbaas is a dependency of Magnum. We want to link to
> an official lbaas guide so that our users will have complete
> instructions. Any pointers?
>
>
>
> [1] https://review.openstack.org/#/c/319399/
>
>
>
> Best regards,
>
> Hongbin
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] looking for documentation liaison

2016-06-01 Thread Jay Faulkner

I don't love writing docs, but I've done more than my share of reading them :). 
I'm very willing to help out here as docs liaison.

Thanks,
Jay Faulkner

From: Loo, Ruby 
Sent: Tuesday, May 31, 2016 10:23:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [ironic] looking for documentation liaison

Hi,

We're looking for a documentation liaison [1]. If you love ('like' is also 
acceptable) documentation, care that ironic has great documentation, and would 
love to volunteer, please let us know.

The position would require you to:

- attend the weekly doc team meetings [2] (or biweekly, depending on which 
times work for you), and represent ironic
- attend the weekly ironic meetings[3] and report (via the subteam reports) on 
anything that may impact ironic
- open bugs/whatever to track getting any documentation-related work done. You 
aren't expected to do the work yourself, although please do if you'd like!
- know the general status of ironic documentation
- see the expectations mentioned at [1]

Please let me know if you have any questions. Thanks, and may the best candidate 
win :)

--ruby

[1] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Documentation
[2] https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting
[3] https://wiki.openstack.org/wiki/Meetings/Ironic





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Plugins] Netconfig tasks changes

2016-06-01 Thread Aleksandr Didenko
Hi,

Support for YAQL expressions in task dependencies has been added to Nailgun
[0]. So now it's possible to fix the network configuration idempotency issue
without introducing a new 'netconfig' task [1]. There will be no problems
with loops in the task graph in this case (tested on multi-role nodes; it
worked fine). When we deprecate role-based deployment (even emulated), we'll
be able to remove all those additional conditions from manifests and remove
the 'configure_default_route' task completely. Please feel free to review and
comment on the patch [1].
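
For anyone unfamiliar with YAQL, a small illustration (assuming the yaql
library; the expression here is made up, not the one used in the patch) of
evaluating a dependency condition against node data:

    import yaql

    engine = yaql.factory.YaqlFactory().create()
    expr = engine('$.roles.where($ = "controller")')
    # A non-empty result means the dependency applies to this node.
    matched = list(expr.evaluate(data={'roles': ['controller', 'compute']}))
    print(bool(matched))  # True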

Regards,
Alex

[0] https://review.openstack.org/#/c/320861/
[1] https://review.openstack.org/#/c/322872/

On Wed, May 25, 2016 at 10:39 AM, Simon Pasquier 
wrote:

> Hi Adam,
> Maybe you want to look into network templates [1]? Although the
> documentation is a bit sparse, it allows you to define flexible network
> mappings.
> BR,
> Simon
> [1]
> https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html#using-networking-templates
>
> On Wed, May 25, 2016 at 10:26 AM, Adam Heczko 
> wrote:
>
>> Thanks Alex, I will experiment with it once again, although AFAIR it doesn't
>> solve the thing I'd like to do.
>> I'll come back to you in case of any questions.
>>
>>
>> On Wed, May 25, 2016 at 10:00 AM, Aleksandr Didenko <
>> adide...@mirantis.com> wrote:
>>
>>> Hey Adam,
>>>
>>> in Fuel we have the following option (checkbox) on Network Setting tab:
>>>
>>> Assign public network to all nodes
>>> When disabled, public network will be assigned to controllers only
>>>
>>> So if you uncheck it (by default it's unchecked) then public network and
>>> 'br-ex' will exist on controllers only. Other nodes won't even have
>>> "Public" network on node interface configuration UI.
>>>
>>> Regards,
>>> Alex
>>>
>>> On Wed, May 25, 2016 at 9:43 AM, Adam Heczko 
>>> wrote:
>>>
 Hello Alex,
 I have a question about the proposed changes.
 Is it possible to introduce a new VLAN and an associated bridge only for
 controllers?
 I am thinking about a DMZ use case and the possibility of exposing public
 IPs/VIPs and API endpoints on controllers on a completely separate L2
 network (segment VLAN/bridge) not present on any nodes other than
 controllers.
 Thanks.

 On Wed, May 25, 2016 at 9:28 AM, Aleksandr Didenko <
 adide...@mirantis.com> wrote:

> Hi folks,
>
> we had to revert those changes [0] since it's impossible to properly
> handle two different netconfig tasks for multi-role nodes. So everything
> stays as it was before - we have a single 'netconfig' task to configure
> the network for all roles, and you don't need to change anything in your
> plugins. Sorry for the inconvenience.
>
> Our current plan for fixing network idempotency is to keep one task
> but change the 'cross-depends' parameter to a yaql_exp. This will allow
> us to use a single 'netconfig' task for all roles, but at the same time
> we'll be able to properly order it: netconfig on non-controllers will be
> executed only after the 'virtual_ips' task.
>
> Regards,
> Alex
>
> [0] https://review.openstack.org/#/c/320530/
>
>
> On Thu, May 19, 2016 at 2:36 PM, Aleksandr Didenko <
> adide...@mirantis.com> wrote:
>
>> Hi all,
>>
>> please be aware that now we have two netconfig tasks (in Fuel 9.0+):
>>
>> - netconfig-controller - executed on controllers only
>> - netconfig - executed on all other nodes
>>
>> puppet manifest is the same, but tasks are different. We had to do
>> this [0] in order to fix network idempotency issues [1].
>>
>> So if you have 'netconfig' requirements in your plugin's tasks,
>> please make sure to add 'netconfig-controller' as well, to work properly 
>> on
>> controllers.
>>
>> Regards,
>> Alex
>>
>> [0] https://bugs.launchpad.net/fuel/+bug/1541309
>> [1]
>> https://review.openstack.org/#/q/I229957b60c85ed94c2d0ba829642dd6e465e9eca,n,z
>>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


 --
 Adam Heczko
 Security Engineer @ Mirantis Inc.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> 

Re: [openstack-dev] [Nova] State machines in Nova

2016-06-01 Thread Joshua Harlow
Sounds similar to https://review.openstack.org/#/c/224022/ (which is the 
ironic version of exposing state machine transitions over the REST API); 
probably useful to read over the review commentary there and/or talk to 
the ironic folks about that before doing much here (to learn some of the 
pros/cons and such).


Andrew Laski wrote:


On Wed, Jun 1, 2016, at 05:51 AM, Miles Gould wrote:

On 31/05/16 21:03, Timofei Durakov wrote:

there is blueprint[1] that was approved during Liberty and resubmitted
to Newton(with spec[2]).
The idea is to define state machines for operations as live-migration,
resize, etc. and to deal with them operation states.

+1 to introducing an explicit state machine - IME they make complex
logic much easier to reason about. However, think carefully about how
you'll make changes to that state machine later. In Ironic, this is an
ongoing problem: every time we change the state machine, we have to
decide whether to lie to older clients (and if so, what lie to tell
them), or whether to present them with the truth (and if so, how badly
they'll break). AIUI this would be a much smaller problem if we'd
considered this possibility carefully at the beginning.


This is a great point. I think most people have an implicit assumption
that the state machine will be exposed to end users via the API. I would
like to avoid that for exactly the reason you've mentioned. Of course
we'll want to expose something to users but whatever that is should be
loosely coupled with the internal states that actually drive the system.
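
A tiny sketch of that decoupling (illustrative only, not Nova code): drive
the operation with a fine-grained internal machine, but map every internal
state onto a small, stable set of API values:

    # Internal transitions can grow and change freely between releases...
    TRANSITIONS = {
        ('queued', 'pre_migrate'): 'preparing',
        ('preparing', 'start_copy'): 'migrating',
        ('migrating', 'post_copy'): 'finishing',
        ('finishing', 'done'): 'complete',
    }

    # ...while the API mapping stays coarse, so older clients keep working.
    API_STATE = {'queued': 'migrating', 'preparing': 'migrating',
                 'migrating': 'migrating', 'finishing': 'migrating',
                 'complete': 'completed'}

    def advance(state, event):
        try:
            return TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError('invalid transition: %s + %s' % (state, event))

    # What a client would see while all of this churns internally:
    print(API_STATE[advance('queued', 'pre_migrate')])  # -> 'migrating'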



Miles

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][lbaas] Operator-facing installation guide

2016-06-01 Thread Hongbin Lu
Hi lbaas team,

I wonder if there is an operator-facing installation guide for neutron-lbaas. I 
ask because Magnum is working on an installation guide [1] and neutron-lbaas 
is a dependency of Magnum. We want to link to an official lbaas guide so that 
our users will have complete instructions. Any pointers?

[1] https://review.openstack.org/#/c/319399/

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [stable] Proposal to add Ian Cordasco to glance-stable-maint

2016-06-01 Thread Matt Riedemann

On 5/25/2016 4:12 PM, Nikhil Komawar wrote:

Hi all,


I would like to propose adding Ian to glance-stable-maint team. The
interest is coming from him and I've already asked for feedback from the
current glance-stable-maint folks, which has been in Ian's favor. Also,
as Ian mentions, the current global stable team isn't going to subsume
the per-project teams anytime soon.


Ian is willing to shoulder the responsibility of stable liaison for
Glance [1] which is great news. If no objections are raised by Friday
May 27th 2359UTC, we will go ahead and do the respective changes.


[1] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Stable_Branch




Post-vacation +1 to show my support. I've always appreciated Ian's 
support on helping to sort out issues in the gate and am comfortable 
with his knowledge on the stable review policy.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Correct wrong nomenclature in the Networks section of Fuel-ui

2016-06-01 Thread Giuseppe Cossu
Hi,
I'll try to explain it better.
As reported in the Fuel documentation "Internal networks include Storage,
Management, and Admin (PXE) Fuel networks. Internal network connects all
OpenStack nodes within an OpenStack environment. All components of an
OpenStack environment communicate with each other using internal networks".

In fuel-ui, under the "Networks" tab, section "Neutron L3", you define the
parameters for the admin tenant network. Basically, Fuel creates a private
(aka tenant) network for the admin tenant with those parameters.

For that reason I think that the usage of "internal network" in the fuel-ui
is wrong (certainly the related description), because the user is not going
to configure a network that "... connects all OpenStack nodes in the
environment."

Regards,
Giuseppe


On Wed, Jun 1, 2016 at 3:54 PM, Neil Jerram  wrote:

> As I just commented in the review, I think the wording was clearer as it
> was before.  So would prefer if you do not use this as a model for further
> similar changes.
>
>Neil
>
>
> On Wed, Jun 1, 2016 at 2:06 PM Giuseppe Cossu <
> giuseppe.co...@create-net.org> wrote:
>
>> Hi folks,
>> I submitted a bug and the related fix about wrong nomenclature in
>> fuel-ui.
>> The fix is merged but I think we need to improve other things such as the
>> translations of the same content and the related parameters (e.g., replace
>> "internal_cidr" with "admin_tenant_cidr") inside the fuel-ui and fuel-web
>> repositories.
>>
>> If we proceed in that way, I suppose it is necessary to coordinate the
>> activities in multiple fuel repositories. What do you think?
>>
>> https://review.openstack.org/#/c/322050/
>> https://bugs.launchpad.net/fuel/+bug/1586332
>>
>> Regards,
>> Giuseppe
>> --
>> 
>> Giuseppe Cossu
>> Distributed Computing and Information Processing (DISCO)
>> Cloud Engineer
>> CREATE-NET  Research Center
>> Via alla Cascata 56/D - 38123 Povo, Trento (Italy)
>> e-mail: giuseppe.co...@create-net.org
>> www.create-net.org
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Giuseppe Cossu
Distributed Computing and Information Processing (DISCO)
Cloud Engineer
CREATE-NET  Research Center
Via alla Cascata 56/D - 38123 Povo, Trento (Italy)
e-mail: giuseppe.co...@create-net.org
Tel: (+39) 0461312428
www.create-net.org

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][designate] designate-tempest-plugin 0.1.0 release

2016-06-01 Thread no-reply
We are thrilled to announce the release of:

designate-tempest-plugin 0.1.0: OpenStack DNS As A Service (Designate)
Functional Tests

This is the first release of designate-tempest-plugin.

With package available at:

https://pypi.python.org/pypi/designate-tempest-plugin

For more details, please see below.

Changes in designate-tempest-plugin 
21f4f8a01a0c343e157cfd69d80aadfdd592ec5a..0.1.0
---

4055b81 Add data driven RecordSet tests
0b12257 Increase default build timeout to 120sec
0127579 Replace assertEqual(a>b) with  assertGreater(a, b)
8aaa574 Skip nameserver propagation tests when no NS's in conf
53dbdbb Add client's method and test cases for /v2/recordsets API
2a8b529 Ensure V1 Records tests calls parent teardown
dddb499 Correctly parse IP:Port nameserver pairs
46cd508 Updated from global requirements
ea0ba08 Add zones ownership transfer accept to Designate tempest plugin
e9785c9 Add zones ownership transfer request to Designate tempest plugin
cf98c26 Add a client for querying nameservers
bbba362 Remove unintended comment
8f53f21 Updated from global requirements
8ae796c Port V1 Tempest test from designate's contrib folder
70dc6ec expected_success should be a classmethod
a3ab50c Add a config for a minimum ttl
2de01be Add tld_client's methods and tests to Designate tempest plugin
4beb93c Add pool_client's methods and tests to Designate tempest plugin
de24d96 Add recordset_client's methods and tests to Designate tempest plugin
c8f7a70 Add zones_export_client's methods and tests to Designate tempest plugin
89edc11 Add quotas client + tests, for the admin api
8abae33 Add blacklist client + smoke tests
aec952a Add zones_import_client's methods and tests to Designate tempest plugin
471df92 Assert with integer status codes to avoid hidden errors
d8471de Move assertExpected fucntion to base class
6db1c01 Several test cleanups
f2ac465 Add a test for deleting pending zones
560c89b API tests should be fast, Scenario tests slow
fef36a8 Adds zone client's methods and tests to Designate tempest plugin
25fb29e Initial layout of Designate tempest plugin



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Ansible 2.0.0 functional

2016-06-01 Thread David Shrewsbury
I believe 2.1 was when Toshio introduced the new ziploader, which totally
changed how the modules were loaded:

https://github.com/ansible/ansible/pull/15246

That broke a few of the 2.x OpenStack modules, too. But that was mostly our
fault, as we didn't code some of them correctly to the proper standards.

-Dave


On Wed, Jun 1, 2016 at 6:10 AM, Jeffrey Zhang 
wrote:

> 1. Ansible 2.1 makes lots of changes compared to Ansible 2.0 in how
>    plugins work. So by default, kolla does not work with Ansible 2.1.
>
> 2. The compatibility fix is merged here [0], so kolla works on both
>    Ansible 2.1 and Ansible 2.0.
>
> [0] https://review.openstack.org/321754
>
> On Wed, Jun 1, 2016 at 2:46 PM, Monty Taylor  wrote:
>
>> On 06/01/2016 08:44 AM, Joshua Harlow wrote:
>> > Out of curiosity, what keeps on changing (breaking?) in ansible that
>> > makes it so that something working in 2.0 doesn't work in 2.1? Isn't the
>> > point of minor version numbers like that so that things in the same
>> > major version number still actually work...
>>
>> I'm also curious to know the answer to this. I expect the 2.0 port to
>> have had the possibility of breaking things - I do not expect 2.0 to 2.1
>> to be similar. Which is not to say you're wrong about it not working,
>> but rather, it would be good to understand what broke so that we can
>> better track it in upstream ansible.
>>
>> > Steven Dake (stdake) wrote:
>> >> Hey folks,
>> >>
>> >> In case you haven't been watching the review queue, Kolla has been
>> >> ported to Ansible 2.0. It does not work with Ansible 2.1, however.
>> >>
>> >> Regards,
>> >> -steve
>> >>
>> >>
>> __
>> >>
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
David Shrewsbury (Shrews)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Correct wrong nomenclature in the Networks section of Fuel-ui

2016-06-01 Thread Neil Jerram
As I just commented in the review, I think the wording was clearer as it
was before, so I would prefer that you not use this as a model for further
similar changes.

   Neil


On Wed, Jun 1, 2016 at 2:06 PM Giuseppe Cossu 
wrote:

> Hi folks,
> I submitted a bug and the related fix for incorrect nomenclature in
> fuel-ui.
> The fix is merged, but I think we need to improve other things, such as the
> translations of the same content and the related parameters (e.g., replacing
> "internal_cidr" with "admin_tenant_cidr") inside the fuel-ui and fuel-web
> repositories.
>
> If we proceed that way, I suppose we will need to coordinate the
> activities across multiple fuel repositories. What do you think?
>
> https://review.openstack.org/#/c/322050/
> https://bugs.launchpad.net/fuel/+bug/1586332
>
> Regards,
> Giuseppe
> --
> 
> Giuseppe Cossu
> Distributed Computing and Information Processing (DISCO)
> Cloud Engineer
> CREATE-NET  Research Center
> Via alla Cascata 56/D - 38123 Povo, Trento (Italy)
> e-mail: giuseppe.co...@create-net.org
> www.create-net.org
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] Reminder: Trove meeting this afternoon

2016-06-01 Thread Amrith Kumar
A reminder to everyone that there's a Trove meeting scheduled for 2pm Eastern 
Time today.

Currently there is nothing on the agenda. If there's nothing on the agenda by ~ 
60m before the meeting, I propose that we cancel the meeting.

Agenda is at https://wiki.openstack.org/wiki/Meetings/TroveMeeting

Please update if you have something you'd like to discuss.

Thanks,

-amrith

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Correct wrong nomenclature in the Networks section of Fuel-ui

2016-06-01 Thread Giuseppe Cossu
Hi folks,
I submitted a bug and the related fix for incorrect nomenclature in
fuel-ui.
The fix is merged, but I think we need to improve other things, such as the
translations of the same content and the related parameters (e.g., replacing
"internal_cidr" with "admin_tenant_cidr") inside the fuel-ui and fuel-web
repositories.

If we proceed that way, I suppose we will need to coordinate the
activities across multiple fuel repositories. What do you think?

https://review.openstack.org/#/c/322050/
https://bugs.launchpad.net/fuel/+bug/1586332

Regards,
Giuseppe
-- 

Giuseppe Cossu
Distributed Computing and Information Processing (DISCO)
Cloud Engineer
CREATE-NET  Research Center
Via alla Cascata 56/D - 38123 Povo, Trento (Italy)
e-mail: giuseppe.co...@create-net.org
www.create-net.org

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Discussion of PuppetOpenstack Project abbreviation

2016-06-01 Thread Dmitry Tantsur

On 06/01/2016 02:20 PM, Jason Guiditta wrote:

On 01/06/16 18:49 +0800, Xingchao Yu wrote:

  Hi, everyone:

  Do we need to give an abbreviation to the PuppetOpenstack project? B/C
  it's really a long name when I introduce this project to people or
  write articles about it.

  How about POM (PuppetOpenstack Modules) or POP (PuppetOpenstack
  Project)?

  I would like to +1 POM.
  Just an idea, please feel free to give your comments :D
  Xingchao Yu


For rdo and osp, we package it as 'openstack-puppet-modules', or OPM
for short.


I definitely love POM as it reminds me of pomeranians :) but I agree
that OPM will probably be more easily recognizable.




-j

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] State machines in Nova

2016-06-01 Thread Monty Taylor
On 06/01/2016 03:50 PM, Andrew Laski wrote:
> 
> 
> On Wed, Jun 1, 2016, at 05:51 AM, Miles Gould wrote:
>> On 31/05/16 21:03, Timofei Durakov wrote:
>>> there is blueprint[1] that was approved during Liberty and resubmitted
>>> to Newton(with spec[2]).
>>> The idea is to define state machines for operations as live-migration,
>>> resize, etc. and to deal with them operation states.
>>
>> +1 to introducing an explicit state machine - IME they make complex 
>> logic much easier to reason about. However, think carefully about how 
>> you'll make changes to that state machine later. In Ironic, this is an 
>> ongoing problem: every time we change the state machine, we have to 
>> decide whether to lie to older clients (and if so, what lie to tell 
>> them), or whether to present them with the truth (and if so, how badly 
>> they'll break). AIUI this would be a much smaller problem if we'd 
>> considered this possibility carefully at the beginning.
> 
> This is a great point. I think most people have an implicit assumption
> that the state machine will be exposed to end users via the API. I would
> like to avoid that for exactly the reason you've mentioned. Of course
> we'll want to expose something to users but whatever that is should be
> loosely coupled with the internal states that actually drive the system. 

+1billion


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] State machines in Nova

2016-06-01 Thread Andrew Laski
 
 
 
On Wed, Jun 1, 2016, at 06:06 AM, Timofei Durakov wrote:
> From my side, I'm concerned about the proposed transition from option #1
> to option #2, because it would be quite a big change. So I wonder, has any
> component team implemented such a transition? Open questions:
>  * upgrades story: potential issues
>  * dealing with clients(?)
>  * promoting the state machine from verifying states to conducting the
>    task (success stories)
 
I would also be interested in hearing post mortems from projects that
have done this.
 
It would be a big change to transition from #1 to #2 but I don't think
there's any less work involved to just do #2 first. Formalizing the
states we want and adding logic around that has to take place in either
option. I see option 1 as a small chunk of option 2, not an alternative.
 
 
> Timofey
>
> On Wed, Jun 1, 2016 at 12:51 PM, Miles Gould
>  wrote:
>> On 31/05/16 21:03, Timofei Durakov wrote:
>>> there is blueprint[1] that was approved during Liberty and
>>> resubmitted to Newton(with spec[2]). The idea is to define state
>>> machines for operations as live-migration, resize, etc. and to deal
>>> with them operation states.
>>
>> +1 to introducing an explicit state machine - IME they make complex
>> logic much easier to reason about. However, think carefully about how
>> you'll make changes to that state machine later. In Ironic, this is
>> an ongoing problem: every time we change the state machine, we have
>> to decide whether to lie to older clients (and if so, what lie to
>> tell them), or whether to present them with the truth (and if so, how
>> badly they'll break). AIUI this would be a much smaller problem if
>> we'd considered this possibility carefully at the beginning.
>>
>>  Miles
>>
>>
>> ___-
>> ___
>>  OpenStack Development Mailing List (not for usage questions)
>>  Unsubscribe: OpenStack-dev-
>>  requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> -
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] State machines in Nova

2016-06-01 Thread Andrew Laski


On Wed, Jun 1, 2016, at 05:51 AM, Miles Gould wrote:
> On 31/05/16 21:03, Timofei Durakov wrote:
> > there is blueprint[1] that was approved during Liberty and resubmitted
> > to Newton(with spec[2]).
> > The idea is to define state machines for operations as live-migration,
> > resize, etc. and to deal with them operation states.
> 
> +1 to introducing an explicit state machine - IME they make complex 
> logic much easier to reason about. However, think carefully about how 
> you'll make changes to that state machine later. In Ironic, this is an 
> ongoing problem: every time we change the state machine, we have to 
> decide whether to lie to older clients (and if so, what lie to tell 
> them), or whether to present them with the truth (and if so, how badly 
> they'll break). AIUI this would be a much smaller problem if we'd 
> considered this possibility carefully at the beginning.

This is a great point. I think most people have an implicit assumption
that the state machine will be exposed to end users via the API. I would
like to avoid that for exactly the reason you've mentioned. Of course
we'll want to expose something to users but whatever that is should be
loosely coupled with the internal states that actually drive the system. 
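
For anyone who hasn't looked at how Ironic does it: a minimal sketch of the
explicit-state-machine pattern using the automaton library (which Ironic's
states.py builds on). The states and events below are illustrative only,
not Nova's or Ironic's real ones:

from automaton import machines

# Illustrative live-migration states only -- not Nova's real task states.
m = machines.FiniteMachine()
m.add_state('queued')
m.add_state('preparing')
m.add_state('migrating')
m.add_state('completed', terminal=True)
m.add_state('error', terminal=True)

m.add_transition('queued', 'preparing', 'start')
m.add_transition('preparing', 'migrating', 'pre_checks_ok')
m.add_transition('preparing', 'error', 'fail')
m.add_transition('migrating', 'completed', 'finish')
m.add_transition('migrating', 'error', 'fail')

m.default_start_state = 'queued'
m.initialize()
m.process_event('start')
m.process_event('pre_checks_ok')
m.process_event('finish')
print(m.current_state)  # 'completed'; an invalid event raises an error

The point is that every allowed transition is declared in one place, so an
out-of-order event fails loudly instead of leaving the record in a state
nobody anticipated.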


> 
> Miles
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Discussion of PuppetOpenstack Project abbreviation

2016-06-01 Thread Jason Guiditta

On 01/06/16 18:49 +0800, Xingchao Yu wrote:

  Hi, everyone:

  Do we need to give an abbreviation to the PuppetOpenstack project? B/C
  it's really a long name when I introduce this project to people or
  write articles about it.

  How about POM (PuppetOpenstack Modules) or POP (PuppetOpenstack
  Project)?

  I would like to +1 POM.
  Just an idea, please feel free to give your comments :D
  Xingchao Yu


For rdo and osp, we package it as 'openstack-puppet-modules', or OPM
for short.

-j

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] looking for documentation liaison

2016-06-01 Thread Vladyslav Drok
Hi,

I "like" documentation, so if you won't find anyone that "loves" it,
consider me a candidate :)

- Vlad

On Tue, May 31, 2016 at 8:23 PM, Loo, Ruby  wrote:

> Hi,
>
> We're looking for a documentation liaison [1]. If you love ('like' is also
> acceptable) documentation, care that ironic has great documentation, and
> would love to volunteer, please let us know.
>
> The position would require you to:
>
> - attend the weekly doc team meetings [2] (or biweekly, depending on which
> times work for you), and represent ironic
> - attend the weekly ironic meetings[3] and report (via the subteam
> reports) on anything that may impact ironic
> - open bugs/whatever to track getting any documentation-related work done.
> You aren't expected to do the work yourself, although please do if you'd
> like!
> - know the general status of ironic documentation
> - see the expectations mentioned at [1]
>
> Please let me know if you have any questions. Thanks, and may the best
> candidate win!
>
> --ruby
>
> [1] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Documentation
> [2] https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting
> [3] https://wiki.openstack.org/wiki/Meetings/Ironic
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [oslo] Template to follow for policy support?

2016-06-01 Thread Mathieu Mitchell

Hi Jay,

Posted here because you're probably sleeping now ;)

On 2016-05-31 7:01 PM, Jay Faulkner wrote:

Hi all,


During this cycle, on behalf of OSIC, I'll be working on implementing proper 
oslo.policy support for Ironic. The reasons this is needed probably don't need 
to be explained here, so I won't :).


I have two requests for the list regarding this though:


1) Is there a general guideline to follow when designing policy roles? There 
appears to have been some discussion around this already here: 
https://review.openstack.org/#/c/245629/, but it hasn't moved in over a month. 
I want Ironic's implementation of policy to be as 'standard' as possible, but 
I've had trouble finding any kind of standard.



I think the link you posted is specifically about standardizing what it
should be. All of this seems very new; I suspect we will see guidelines
appear once common-default-policy lands.
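
For concreteness, a minimal sketch of what policy-in-code looks like with
oslo.policy (the rule names and check strings below are invented for
illustration, not what Ironic will actually adopt -- that is exactly what
the spec review should decide):

from oslo_config import cfg
from oslo_policy import policy

CONF = cfg.CONF

# Hypothetical rule names and check strings, for illustration only.
rules = [
    policy.RuleDefault('admin_api', 'role:admin or role:administrator',
                       description='Legacy rule for cloud admin access'),
    policy.RuleDefault('baremetal:node:get', 'rule:admin_api',
                       description='Retrieve a single node record'),
]

enforcer = policy.Enforcer(CONF)
enforcer.register_defaults(rules)

# In a real service, target and creds come from the request context.
creds = {'roles': ['admin']}
enforcer.enforce('baremetal:node:get', {}, creds, do_raise=True)

Operators can then override any registered default from a policy file
without the project having to ship one.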




2) A general call for contributors to help make this happen in Ironic. I want, 
in the next week, to finish up the research and start on a spec. Anyone willing 
to help with the design or implementation let me know here or in IRC so we can 
work together.



Willing to help with both.


Mathieu



Thanks in advance,

Jay Faulkner


P.S. Yes, I am aware of 
http://specs.openstack.org/openstack/oslo-specs/specs/newton/policy-in-code.html
 and will ensure whatever Ironic does follows this specification.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [monitoring-for-openstack] Project retiring

2016-06-01 Thread Andreas Jaeger
On 2016-06-01 13:26, Monty Taylor wrote:
> On 06/01/2016 12:24 PM, Martin Magr wrote:
>> Greetings,
>>
>>  due to the import of project monitoring-for-openstack to project
>> osops-tools-monitoring [1] the imported project will be retired. I've
>> started with the process today [2] and expect to finish the retiring
>> within a week.
>>
>> Regards,
>> Martin
>>
>> [1] https://review.openstack.org/248352
>> [2] https://review.openstack.org/323751
> 
> Might I suggest that before we land the change to remove the jenkins
> jobs, we land a commit to the repo that replaces all of the content with
> a README file telling people that the repo is no longer used and that
> people should look to osops-tools-monitoring? The
> monitoring-for-openstack repo will not go away, so I'd hate for someone
> to find it and get confused.

That's step 2 - see
http://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

1)  https://review.openstack.org/323751
2)  Remove content as you suggest above
3)  Make repository read-only

We need step 1 so that step 2 can merge - with the current jobs running,
we cannot remove the content.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [monitoring-for-openstack] Project retiring

2016-06-01 Thread Monty Taylor
On 06/01/2016 12:24 PM, Martin Magr wrote:
> Greetings,
> 
>  due to the import of project monitoring-for-openstack to project
> osops-tools-monitoring [1] the imported project will be retired. I've
> started with the process today [2] and expect to finish the retiring
> within a week.
> 
> Regards,
> Martin
> 
> [1] https://review.openstack.org/248352
> [2] https://review.openstack.org/323751

Might I suggest that before we land the change to remove the jenkins
jobs, we land a commit to the repo that replaces all of the content with
a README file telling people that the repo is no longer used and that
people should look to osops-tools-monitoring? The
monitoring-for-openstack repo will not go away, so I'd hate for someone
to find it and get confused.
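
Something as small as this in the README would do (wording illustrative):

    This project is no longer maintained. Its contents have moved to the
    openstack/osops-tools-monitoring repository (see
    https://review.openstack.org/248352 for the import). For any further
    questions, please email openstack-dev@lists.openstack.org.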


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [glance] Proposal for a mid-cycle virtual sync on operator issues

2016-06-01 Thread Belmiro Moreira
Hi,
thanks Nikhil.
My availability considering what was proposed:
2000 UTC - OK
1100 UTC - OK

On Tue, May 31, 2016 at 11:13 PM, Nikhil Komawar 
wrote:

> Hey,
>
>
> Thanks for your interest.
>
> Sorry about the confusion. Please consider the same time for Thursday
> June 9th.
>
>
> Thur June 9th proposed time:
>
> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016=6=9=11=0=0=881=196=47=22=157=87=24=78=283
>
>
> Alternate time proposal:
>
> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016=6=9=23=0=0=881=196=47=22=157=87=24=78=283
>
>
> Overall time planner:
>
> http://www.timeanddate.com/worldclock/meetingtime.html?iso=20160609=881=196=47=22=157=87=24=78=283
>
>
>
> It will really depend on who is strongly interested in the discussions.
> Scheduling across EMEA, US Pacific, and Australian (esp. Eastern) time
> zones is quite difficult. If there's strong interest from San Jose, we may
> have to settle for the rather awkward choice below:
>
>
>
> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016=6=9=4=0=0=881=196=47=22=157=87=24=78=283
>
>
>
> A vote of +1, 0, -1 on these times would help long way.
>
>
> On 5/31/16 4:35 PM, Belmiro Moreira wrote:
> > Hi Nikhil,
> > I'm interested in this discussion.
> >
> > Initially you were proposing Thursday June 9th, 2016 at 2000UTC.
> > Are you suggesting changing the date as well? Because the new
> > timeanddate suggestions are for June 6/7.
> >
> > Belmiro
> >
> > On Tue, May 31, 2016 at 6:13 PM, Nikhil Komawar  > > wrote:
> >
> > Hey,
> >
> >
> >
> >
> >
> > Thanks for the feedback. 0800UTC is 4am EDT for some of the US
> > Glancers :-)
> >
> >
> >
> >
> >
> > I request this time which may help the folks in Eastern and Central
> US
> >
> > time.
> >
> >
> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016=6=7=11=0=0=881=196=47=22=157=87=24=78
> >
> >
> >
> >
> >
> > If it still does not work, I may have to poll the folks in EMEA on
> how
> >
> > strong their intentions are for joining this call.  Because
> > another time
> >
> > slot that works for folks in Australia & US might be too inconvenient
> >
> > for those in EMEA:
> >
> >
> http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016=6=6=23=0=0=881=196=47=22=157=87=24=78
> >
> >
> >
> >
> >
> > Here's the map of cities that may be involved:
> >
> >
> http://www.timeanddate.com/worldclock/meetingtime.html?iso=20160607=881=196=47=22=157=87=24=78
> >
> >
> >
> >
> >
> > Please let me know which ones are possible and we can try to work
> > around
> >
> > the times.
> >
> >
> >
> >
> >
> > On 5/31/16 2:54 AM, Blair Bethwaite wrote:
> >
> > > Hi Nikhil,
> >
> > >
> >
> > > 2000UTC might catch a few kiwis, but it's 6am everywhere on the
> east
> >
> > > coast of Australia, and even earlier out west. 0800UTC, on the
> other
> >
> > > hand, would be more sociable.
> >
> > >
> >
> > > On 26 May 2016 at 15:30, Nikhil Komawar  > > wrote:
> >
> > >> Thanks Sam. We purposefully chose that time to accommodate some
> > of our
> >
> > >> community members from the Pacific. I'm assuming it's just your
> > case
> >
> > >> that's not working out for that time? So, hopefully other
> > Australian/NZ
> >
> > >> friends can join.
> >
> > >>
> >
> > >>
> >
> > >> On 5/26/16 12:59 AM, Sam Morrison wrote:
> >
> > >>> I’m hoping some people from the Large Deployment Team can come
> > along. It’s not a good time for me in Australia but hoping someone
> > else can join in.
> >
> > >>>
> >
> > >>> Sam
> >
> > >>>
> >
> > >>>
> >
> >  On 26 May 2016, at 2:16 AM, Nikhil Komawar
> > > wrote:
> >
> > 
> >
> >  Hello,
> >
> > 
> >
> > 
> >
> >  Firstly, I would like to thank Fei Long for bringing up a few
> > operator
> >
> >  centric issues to the Glance team. After chatting with him on
> > IRC, we
> >
> >  realized that there may be more operators who would want to
> > contribute
> >
> >  to the discussions to help us take some informed decisions.
> >
> > 
> >
> > 
> >
> >  So, I would like to call for a 2 hour sync for the Glance
> > team along
> >
> >  with interested operators on Thursday June 9th, 2016 at 2000UTC.
> >
> > 
> >
> > 
> >
> >  If you are interested in participating please RSVP here [1], and
> >
> >  participate in the poll for the tool you'd prefer. I've also
> > added a
> >
> >  section for Topics and provided a template to document the
> > issues clearly.
> >
> > 
> >
> > 
> >
> >  Please be mindful of everyone's time and if 

[openstack-dev] [puppet] Discussion of PuppetOpenstack Project abbreviation

2016-06-01 Thread Xingchao Yu
Hi, everyone:

Do we need to give an abbreviation to the PuppetOpenstack project? B/C it's
really a long name when I introduce this project to people or write
articles about it.

How about POM (PuppetOpenstack Modules) or POP (PuppetOpenstack Project)?

I would like to +1 POM.

Just an idea, please feel free to give your comments :D




Xingchao Yu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins] Should we rename "Higgins"?

2016-06-01 Thread Kumari, Madhuri
Thanks Shu for providing suggestions.

I wanted the new name to be related to containers, as Magnum is also a synonym
for containers. So I have a few options here.

1. Casket
2. Canister
3. Cistern
4. Hutch

All above options are free to be taken on pypi and Launchpad.
Thoughts?

Regards
Madhuri

-Original Message-
From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com] 
Sent: Wednesday, June 1, 2016 11:11 AM
To: openstack-dev@lists.openstack.org
Cc: Haruhiko Katou 
Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?

I found some container-related names and checked whether any other project uses them.

https://en.wikipedia.org/wiki/Straddle_carrier
https://en.wikipedia.org/wiki/Suezmax
https://en.wikipedia.org/wiki/Twistlock

These words are not used by any other project on PYPI or Launchpad.

ex.)
https://pypi.python.org/pypi/straddle
https://launchpad.net/straddle


However, since the renaming chance for the N cycle will be handled by the
Infra team this Friday, we would not meet the deadline. So:

1. use 'Higgins' ('python-higgins' for the package name)
2. consider another name for the next renaming chance (after half a year)

Thoughts?


Regards,
Shu


> -Original Message-
> From: Hongbin Lu [mailto:hongbin...@huawei.com]
> Sent: Wednesday, June 01, 2016 11:37 AM
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
> 
> Shu,
> 
> According to the feedback from the last team meeting, Gatling doesn't 
> seem to be a suitable name. Are you able to find an alternative name?
> 
> Best regards,
> Hongbin
> 
> > -Original Message-
> > From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
> > Sent: May-24-16 4:30 AM
> > To: openstack-dev@lists.openstack.org
> > Cc: Haruhiko Katou
> > Subject: [openstack-dev] [higgins] Should we rename "Higgins"?
> >
> > Hi all,
> >
> > Unfortunately "higgins" is used by a media server project on Launchpad
> > and by CI software on PYPI. Now, we use "python-higgins" for our
> > project on Launchpad.
> >
> > IMO, we should rename the project now, to avoid accumulating more places that need patching.
> >
> > How about "Gatling"? It's the only association from Magnum. It's not
> > used on either Launchpad or PYPI.
> > Is there any idea?
> >
> > Renaming opportunity will come (it seems only twice in a year) on 
> > Friday, June 3rd. Few projects will rename on this date.
> > http://markmail.org/thread/ia3o3vz7mzmjxmcx
> >
> > And if project name issue will be fixed, I'd like to propose UI 
> > subproject.
> >
> > Thanks,
> > Shu
> >
> >
> >
> __
> _
> > ___
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Storing deployment configuration before or after a successful deployment

2016-06-01 Thread Bulat Gaifullin

IMO:
Because we do not have versioning for all settings, the discard button
should reset to the last deployed state (if a last deployed state exists;
otherwise the discard button should not be available).
Currently the availability of the discard button is calculated from the
state of the cluster, but that is not correct: the first deployment may
fail and leave the cluster in the 'error' state even though it has no last
successfully deployed configuration.

Regards,
Bulat Gaifullin
Mirantis Inc.



> On 25 May 2016, at 15:05, Roman Prykhodchenko  wrote:
> 
> Folks,
> 
> Recently we were investigating an issue [1] in which a user configured a
> cluster so as to make deployment fail, and then expected the discard button
> to allow resetting the changes made after that failure. As Julia mentioned
> in her comment on the bug, what we've got is that users perceive the
> cluster.deployed attribute as a snapshot of the latest deployment
> configuration, while it was designed to keep the latest configuration of a
> successful deployment. Should we reconsider the meaning of that attribute,
> and therefore the features and the action of the Discard button?
> 
> 
> References:
> 
> 1. https://bugs.launchpad.net/fuel/+bug/1584681
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] State machines in Nova

2016-06-01 Thread Murray, Paul (HP Cloud)


> -Original Message-
> From: Andrew Laski [mailto:and...@lascii.com]
> Sent: 31 May 2016 22:34
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] State machines in Nova
> 
> 
> 
> On Tue, May 31, 2016, at 04:26 PM, Joshua Harlow wrote:
> > Timofei Durakov wrote:
> > > Hi team,
> > >
> > > there is blueprint[1] that was approved during Liberty and
> > > resubmitted to Newton(with spec[2]).

Do you mean the blueprint was approved? I think it was agreed there was no
need for a spec then. There is a spec now because there is a minor API impact
(if state names are changed to be consistent across types of migration).



> > > The idea is to define state machines for operations as
> > > live-migration, resize, etc. and to deal with them operation states.
> > > The spec PoC patches are overall good. At the same time I think is
> > > will be good to get agreement on the usage of state-machines in Nova.
> > > There are 2 options:
> > >
> > >   * implement proposed change and use state machines to deal with
> states
> > > only
> >
> > I think this is what could be called the ironic equivalent correct?
> >
> > In ironic @
> >
> https://github.com/openstack/ironic/blob/master/ironic/common/states.p
> > y the state machine here is used to ensure proper states are
> > transitioned over and no invalid/unexpected state transitions happen.
> > The code though itself still runs in a implicit fashion and afaik only
> > interacts with the state machine as a side-effect of actions occurring
> > (instead of the reverse where the state machine itself is 'driving'
> > those actions to happen/to completion).
> 
> Yes. This exists in a limited form already in Nova for instances and
> task_states.
> 
> >
> > >   o procs:
> > >   + could be implemented/merged right now
> > >   + cleans up states for migrations
> > >   o cons:
> > >   + state machine only deal with states, and it will be hard to
> > > build on top of it task API, as bp [1] was designed for
> > > another thing.
> > >
> > >   * use state machines in Task API(which I'm going to work on during
> > > next release):
> >
> > So this would be the second model described above, where the state
> > machine (or set of state machines) itself (together could be formed
> > into a action plan, or action workflow or ...) would be the 'entity'
> > realizing a given action and ensuring that it is performed until
> > completed (or tracking where it was paused and such); is that correct?
> >
> > >   o procs:
> > >   + Task API will orchestrate and deal with long running tasks
> > >   + usage state-machines could help with actions
> > > rollbacks/retries/etc.
> > >   o cons:
> > >   + big amount of work
> > >   + requires time.
> > >
> > > I'd like to discuss these options in this thread.
> >
> > It seems like one could progress from the first model to the second
> > one, although that kind of progression would still be large (because
> > if my understanding is correct the control of who runs what has to be
> > given over to something else in the second model, similar to the
> > control a taskflow engine or mistral engine has over what it runs);
> > said control means that certain programming models may not map so well
> > (from what I have seen).
> 
> I think working through this as a progression from the first model to the
> second one would be the best plan. Start with formalizing the states and
> their allowed transitions and add checking and error handling around that.
> Then work towards handing off control to an engine that could drive the
> operation.
> 

I'm inclined to agree with Andrew. I don't see this as inconsistent with the
tasks API approach, although it may become redundant. The spec is actually
pretty simple, so why make it wait for something that may only exist in the
future?
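
As a toy illustration of the "engine drives the task" model being discussed,
a taskflow sketch -- the tasks and names here are invented, not from the spec:

from taskflow import engines
from taskflow import task
from taskflow.patterns import linear_flow


class PreChecks(task.Task):
    def execute(self):
        print('running pre-migration checks')


class Migrate(task.Task):
    def execute(self):
        print('migrating')

    def revert(self, **kwargs):
        # Called automatically if the flow is reverted after a failure.
        print('rolling back migration')


# The engine, not the calling code, owns execution, retries, and rollback.
flow = linear_flow.Flow('live-migration').add(PreChecks(), Migrate())
engines.run(flow)

Handing over that control is precisely the "big amount of work" in option #2.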


> > >
> > > [1] -
> > > https://blueprints.launchpad.net/openstack/?searchtext=migration-sta
> > > te-machine [2] - https://review.openstack.org/#/c/320849/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Ansible 2.0.0 functional

2016-06-01 Thread Jeffrey Zhang
1. Ansible 2.1 makes a lot of changes compared to Ansible 2.0 in how
   the plugins work, so by default kolla does not work with Ansible 2.1.

2. The compatibility fix is merged here[0], so kolla works on both
   Ansible 2.1 and Ansible 2.0.

[0] https://review.openstack.org/321754

On Wed, Jun 1, 2016 at 2:46 PM, Monty Taylor  wrote:

> On 06/01/2016 08:44 AM, Joshua Harlow wrote:
> > Out of curiosity, what keeps on changing (breaking?) in ansible that
> > makes it so that something working in 2.0 doesn't work in 2.1? Isn't the
> > point of minor version numbers like that so that things in the same
> > major version number still actually work...
>
> I'm also curious to know the answer to this. I expect the 2.0 port to
> have had the possibility of breaking things - I do not expect 2.0 to 2.1
> to be similar. Which is not to say you're wrong about it not working,
> but rather, it would be good to understand what broke so that we can
> better track it in upstream ansible.
>
> > Steven Dake (stdake) wrote:
> >> Hey folks,
> >>
> >> In case you haven't been watching the review queue, Kolla has been
> >> ported to Ansible 2.0. It does not work with Ansible 2.1, however.
> >>
> >> Regards,
> >> -steve
> >>
> >>
> __
> >>
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

