[openstack-dev] [election] [tc] TC Candidacy

2017-04-07 Thread Adrian Otto
I am announcing my candidacy [1] for a seat on the OpenStack Technical 
Committee. I've had the honor of working on and with OpenStack since it was 
conceived. Since our first Design Summit on July 13-16, 2010, I have been 
thrilled to be a part of our movement together. We have come a long way since 
then, from a hopeful open source project with our friends at Rackspace and NASA 
to a thriving global community that has set the definitive standard for cloud 
software.

Over the past seven years, I have viewed OpenStack as my primary pursuit. I 
love our community, and the way we embrace the “four opens”. Along this 
journey, I have done my very best to push the limits of our innovative spirit, 
and pursue new and exciting ways of using software defined systems to make 
possible what we could have only imagined when we started. I have served our 
community as an innovator and as a PTL for the better part of the past five 
years. I served as the founding PTL of OpenStack Solum, and pivoted to become 
the founding PTL of OpenStack Magnum, a role I still serve in today. Each of 
these projects was aimed at making OpenStack technology more easily automated, 
more efficient, and better combined with cutting-edge new technologies.

I am now ready to embark on a wider mission. I’m prepared to transition 
leadership of Magnum to my esteemed team members, and pursue a role with the 
OpenStack TC to have an even more profound impact on our future success. I have 
a unique perspective on OpenStack governance, having repeatedly used our various 
processes and applied our rules, guidelines, and values as they have evolved. I 
deeply respect the OpenStack community, our TC, and their respective 
memberships. I look forward to serving in an expanded role, and helping to make 
OpenStack even better than it is today.

I will support efforts to:

1) Make OpenStack a leading platform for running the next generation of cloud 
native applications. This means providing sensible and secure ways for our 
data plane and control plane systems to integrate. For example, OpenStack 
should be just as good at running container workloads as it is at running bare 
metal and virtualized ones. Our applications should be able to self-heal, 
scale, and dynamically interact with our clouds in a way that’s
safe and effective.

2) Expand our support for languages beyond Python. Over the past year, our TC 
has taken productive steps in this direction, and I would like to further 
advance this work so that we can introduce software written in other languages, 
such as Golang, in a way that’s supportable and appropriate for our community’s 
growing needs.

3) Advocate for inclusivity and diversity, not only of software languages but 
of contributors from all corners of the Earth. I feel it’s important to 
consider perspectives from various geographies, cultures, and of course all 
genders. I want to maintain a welcoming destination where both novice and 
veteran contributors will thrive.

4) Continue our current work on our “One Platform” pursuit, and help to refine 
which of our teams should remain in OpenStack, and which should not. I will 
also work to contribute to documenting our culture and systems and clearly 
defining “how we work”. For an example of this, see how we recently did this 
within the Magnum team [2]. We can borrow from these ideas and re-use the ones 
that are generally useful. This reference should give you a sense of what we 
can accomplish together.

I respectfully ask for your vote and support to pursue this next ambition, and 
I look forward to the honor of serving you well.

Thanks,

Adrian Otto

[1] https://review.openstack.org/454908
[2] https://docs.openstack.org/developer/magnum/policies.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-27 Thread Adrian Otto
> On Mar 22, 2017, at 5:48 AM, Ricardo Rocha <rocha.po...@gmail.com> wrote:
> 
> Hi.
> 
> One simplification would be:
> openstack coe create/list/show/config/update
> openstack coe template create/list/show/update
> openstack coe ca show/sign

I like Ricardo’s suggestion above. I think we should decide between the option 
above (Option 1), and this one (Option 2):

openstack coe cluster create/list/show/config/update
openstack coe cluster template create/list/show/update
openstack coe ca show/sign

Both options are clearly unique to magnum, and are unlikely to cause any future 
collisions with other projects. If you have a preference, please express it so 
we can consider your input and proceed with the implementation. I have a slight 
preference for Option 2 because it more closely reflects how I think about what 
the commands do, and follows the noun/verb pattern correctly. Please share your 
feedback.
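
For anyone curious what the plugin side of this looks like, below is a minimal,
hypothetical sketch of an OSC plugin command following the Option 2 noun
("coe cluster") and the usual cliff/osc-lib pattern. The class, argument, and
client attribute names are illustrative only, not the actual
python-magnumclient implementation.

    # Hypothetical sketch of an OpenStackClient plugin command for
    # "openstack coe cluster create". Names are illustrative, not the
    # actual python-magnumclient code.
    from osc_lib.command import command


    class CreateCluster(command.ShowOne):
        """Create a COE cluster."""

        def get_parser(self, prog_name):
            parser = super(CreateCluster, self).get_parser(prog_name)
            parser.add_argument('name', help='Name of the cluster to create')
            parser.add_argument('--cluster-template', required=True,
                                help='Name or ID of the cluster template')
            parser.add_argument('--node-count', type=int, default=1,
                                help='Number of worker nodes')
            return parser

        def take_action(self, parsed_args):
            # "container_infra" is an assumed attribute name for the magnum
            # client handle exposed by the plugin; adjust to match the real
            # client wiring.
            client = self.app.client_manager.container_infra
            cluster = client.clusters.create(
                name=parsed_args.name,
                cluster_template_id=parsed_args.cluster_template,
                node_count=parsed_args.node_count,
            )
            return zip(*sorted(cluster.to_dict().items()))

The command class would then be registered through an entry point in the
plugin's setup.cfg so that OSC can discover it at run time.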

Thanks,

Adrian

> This covers all the required commands and is a bit less verbose. The
> cluster word is too generic and probably adds no useful info.
> 
> Whatever it is, kerberos support for the magnum client is very much
> needed and welcome! :)
> 
> Cheers,
>  Ricardo
> 
> On Tue, Mar 21, 2017 at 2:54 PM, Spyros Trigazis <strig...@gmail.com> wrote:
>> IMO, coe is a little confusing. It is a term used by people related somehow
>> to the magnum community. When I describe to users how to use magnum,
>> I spend a few moments explaining what we call coe.
>> 
>> I prefer one of the following:
>> * openstack magnum cluster create|delete|...
>> * openstack mcluster create|delete|...
>> * both the above
>> 
>> It is very intuitive for users because they will be using an openstack
>> cloud
>> and they will want to use the magnum service. So, it only makes sense
>> to type openstack magnum cluster, or mcluster, which is shorter.
>> 
>> 
>> On 21 March 2017 at 02:24, Qiming Teng <teng...@linux.vnet.ibm.com> wrote:
>>> 
>>> On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote:
>>>> On 03/20/2017 03:08 PM, Adrian Otto wrote:
>>>>> Team,
>>>>> 
>>>>> Stephen Watson has been working on a magnum feature to add magnum
>>>>> commands to the openstack client by implementing a plugin:
>>>>> 
>>>> 
>>>>>> https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc
>>>>> 
>>>>> In review of this work, a question has resurfaced, as to what the
>>>>> client command name should be for magnum related commands. Naturally, we’d
>>>>> like to have the name “cluster” but that word is already in use by Senlin.
>>>> 
>>>> Unfortunately, the Senlin API uses a whole bunch of generic terms as
>>>> top-level REST resources, including "cluster", "event", "action",
>>>> "profile", "policy", and "node". :( I've warned before that use of
>>>> these generic terms in OpenStack APIs without a central group
>>>> responsible for curating the API would lead to problems like this.
>>>> This is why, IMHO, we need the API working group to be ultimately
>>>> responsible for preventing this type of thing from happening.
>>>> Otherwise, there ends up being a whole bunch of duplication and same
>>>> terms being used for entirely different things.
>>>> 
>>> 
>>> Well, I believe the name and namespaces used by Senlin are very clean.
>>> Please see the following outputs. All commands are contained in the
>>> cluster namespace to avoid any conflicts with any other projects.
>>> 
>>> On the other hand, is there any document stating that Magnum is about
>>> providing a clustering service? Why does Magnum care so much about the
>>> top-level noun if it is not its business?
>> 
>> 
>> From magnum's wiki page [1]:
>> "Magnum uses Heat to orchestrate an OS image which contains Docker
>> and Kubernetes and runs that image in either virtual machines or bare
>> metal in a cluster configuration."
>> 
>> Many services may offer clusters indirectly. Clusters are NOT magnum's focus,
>> but we can't refer to a collection of virtual machines or physical servers
>> with
>> another name. Bay proved to be confusing to users. I don't think that magnum
>> should reserve the cluster noun, even if it were available.
>> 
>> [1] https://wiki.openstack.org/wiki/Magnum
>> 
>>> 
>>> 
>>> 
>>> $ openstack 

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
Clint,

On Mar 20, 2017, at 3:02 PM, Clint Byrum 
<cl...@fewbar.com> wrote:

Excerpts from Adrian Otto's message of 2017-03-20 21:16:09 +:
Jay,

On Mar 20, 2017, at 12:35 PM, Jay Pipes 
<jaypi...@gmail.com> 
wrote:

On 03/20/2017 03:08 PM, Adrian Otto wrote:
Team,

Stephen Watson has been working on a magnum feature to add magnum commands to 
the openstack client by implementing a plugin:

https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc

In review of this work, a question has resurfaced, as to what the client 
command name should be for magnum related commands. Naturally, we’d like to 
have the name “cluster” but that word is already in use by Senlin.

Unfortunately, the Senlin API uses a whole bunch of generic terms as top-level 
REST resources, including "cluster", "event", "action", "profile", "policy", 
and "node". :( I've warned before that use of these generic terms in OpenStack 
APIs without a central group responsible for curating the API would lead to 
problems like this. This is why, IMHO, we need the API working group to be 
ultimately responsible for preventing this type of thing from happening. 
Otherwise, there ends up being a whole bunch of duplication and same terms 
being used for entirely different things.

Stephen opened a discussion with Dean Troyer about this, and found that “infra” 
might be a suitable name and began using that, but multiple team members are 
not satisfied with it.

Yeah, not sure about "infra". That is both too generic and not an actual 
"thing" that Magnum provides.

The name “magnum” was excluded from consideration because OSC aims to be 
project name agnostic. We know that no matter what word we pick, it’s not going 
to be ideal. I’ve added an agenda on our upcoming team meeting to judge 
community consensus about which alternative we should select:

https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-03-21_1600_UTC

Current choices on the table are:

* c_cluster (possible abbreviation alias for container_infra_cluster)
* coe_cluster
* mcluster
* infra

For example, our selected name would appear in “openstack …” commands. Such as:

$ openstack c_cluster create …

If you have input to share, I encourage you to reply to this thread, or come to 
the team meeting so we can consider your input before the team makes a 
selection.

What is Magnum's service-types-authority service_type?

I propose "coe-cluster” for that, but that should be discussed further, as it’s 
impossible for magnum to conform with all the requirements for service types 
because they fundamentally conflict with each other:

https://review.openstack.org/447694

In the past we referred to this type as a “bay” but found it burdensome for 
users and operators to use that term when literally bay == cluster. We just 
needed to call it what it is because there’s a prevailing name for that 
concept, and everyone expects that’s what it’s called.

I think Jay was asking for Magnum's name in the catalog:

Which is 'container-infra' according to this:

https://github.com/openstack/python-magnumclient/blob/master/magnumclient/v1/client.py#L34

I was unsure, so I found him on IRC to clarify, and he pointed me to the 
openstack/service-types-authority repository, where I submitted patch 445694 
for review. We have three distinct identifiers in play:

1) Our existing service catalog entry name: container-infra
2) Our openstack client noun: TBD, decision expected from our team tomorrow. My 
suggestion: "coe cluster”.
3) Our (proposed) service type: coe-cluster

Each identifier has respective guidelines and limits, so they differ.
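
As an illustration of where the first of those identifiers is actually
consumed, this is roughly how a client resolves the magnum endpoint from the
Keystone service catalog with keystoneauth1. A minimal sketch; the credential
values are placeholders, and only the 'container-infra' string reflects the
current catalog entry.

    # Minimal sketch: resolve the magnum endpoint by service type from the
    # Keystone service catalog using keystoneauth1. Credentials are fake.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(
        auth_url='https://keystone.example.com/v3',  # placeholder cloud
        username='demo',
        password='secret',
        project_name='demo',
        user_domain_id='default',
        project_domain_id='default',
    )
    sess = session.Session(auth=auth)

    # 'container-infra' is the service catalog entry magnum registers today.
    endpoint = sess.get_endpoint(service_type='container-infra',
                                 interface='public')
    print(endpoint)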

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
Dean,

Thanks for your reply.

> On Mar 20, 2017, at 2:18 PM, Dean Troyer <dtro...@gmail.com> wrote:
> 
> On Mon, Mar 20, 2017 at 3:37 PM, Adrian Otto <adrian.o...@rackspace.com> 
> wrote:
>> the <command> argument is actually the service name, such as “ec2”. This is 
>> the same way the openstack cli works. Perhaps there is another tool that you 
>> are referring to. Have I misunderstood something?
> 
> I am going to jump in here and clarify one thing.  OSC does not do
> project namespacing, or any other sort of namespacing for its resource
> names.  It uses qualified resource names (fully-qualified even?).  In
> some cases this results in something that looks a lot like
> namespacing, but it isn't. The Volume API commands are one example of
> this, nearly every resource there includes the word 'volume' but not
> because that is the API name, it is because that is the correct name
> for those resources ('volume backup', etc).

Okay, that makes sense, thanks.

>> We could do the same thing and use the text “container_infra”, but we felt 
>> that might be burdensome for interactive use and wanted to find something 
>> shorter that would still make sense.
> 
> Naming resources is hard to get right.  Here's my thought process:
> 
> For OSC, start with how to describe the specific 'thing' being
> manipulated.  In this case, it is some kind of cluster.  In the list
> you posted in the first email, 'coe cluster' seems to be the best
> option.  I think 'coe' is acceptable as an abbreviation (we usually do
> not use them) because that is a specific term used in the field and
> satisfies the 'what kind of cluster?' question.  No underscores
> please, and in fact no dash here, resource names have spaces in them.

So, to be clear, this would result in the following command for what we 
currently use “magnum cluster create” for:

openstack coe cluster create …

Is this right?

Adrian

> 
> dt
> 
> -- 
> 
> Dean Troyer
> dtro...@gmail.com
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
Jay,

On Mar 20, 2017, at 12:35 PM, Jay Pipes 
<jaypi...@gmail.com> wrote:

On 03/20/2017 03:08 PM, Adrian Otto wrote:
Team,

Stephen Watson has been working on a magnum feature to add magnum commands to 
the openstack client by implementing a plugin:

https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc

In review of this work, a question has resurfaced, as to what the client 
command name should be for magnum related commands. Naturally, we’d like to 
have the name “cluster” but that word is already in use by Senlin.

Unfortunately, the Senlin API uses a whole bunch of generic terms as top-level 
REST resources, including "cluster", "event", "action", "profile", "policy", 
and "node". :( I've warned before that use of these generic terms in OpenStack 
APIs without a central group responsible for curating the API would lead to 
problems like this. This is why, IMHO, we need the API working group to be 
ultimately responsible for preventing this type of thing from happening. 
Otherwise, there ends up being a whole bunch of duplication and same terms 
being used for entirely different things.

>Stephen opened a discussion with Dean Troyer about this, and found that 
>“infra” might be a suitable name and began using that, but multiple team 
>members are not satisfied with it.

Yeah, not sure about "infra". That is both too generic and not an actual 
"thing" that Magnum provides.

> The name “magnum” was excluded from consideration because OSC aims to be 
> project name agnostic. We know that no matter what word we pick, it’s not 
> going to be ideal. I’ve added an agenda on our upcoming team meeting to judge 
> community consensus about which alternative we should select:

https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-03-21_1600_UTC

Current choices on the table are:

 * c_cluster (possible abbreviation alias for container_infra_cluster)
 * coe_cluster
 * mcluster
 * infra

For example, our selected name would appear in “openstack …” commands. Such as:

$ openstack c_cluster create …

If you have input to share, I encourage you to reply to this thread, or come to 
the team meeting so we can consider your input before the team makes a 
selection.

What is Magnum's service-types-authority service_type?

I propose "coe-cluster” for that, but that should be discussed further, as it’s 
impossible for magnum to conform with all the requirements for service types 
because they fundamentally conflict with each other:

https://review.openstack.org/447694

In the past we referred to this type as a “bay” but found it burdensome for 
users and operators to use that term when literally bay == cluster. We just 
needed to call it what it is because there’s a prevailing name for that 
concept, and everyone expects that’s what it’s called.

Adrian


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
Hongbin,

> On Mar 20, 2017, at 1:10 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
> 
> Zun had a similar issue of colliding on the keyword "container", and we chose 
> to use an alternative term "appcontainer" that is not perfect but acceptable. 
> IMHO, this kind of top-level name collision issue would be better resolved by 
> introducing namespace per project, which is the approach adopted by AWS.

Can you explain this further, please? My understanding is that the AWS cli tool 
has a single global namespace for commands in the form:

aws [options] <command> <subcommand> [parameters]

the <command> argument is actually the service name, such as “ec2”. This is the 
same way the openstack cli works. Perhaps there is another tool that you are 
referring to. Have I misunderstood something?

We could do the same thing and use the text “container_infra”, but we felt that 
might be burdensome for interactive use and wanted to find something shorter 
that would still make sense.

Thanks,

Adrian

> 
> Best regards,
> Hongbin
> 
>> -Original Message-
>> From: Jay Pipes [mailto:jaypi...@gmail.com]
>> Sent: March-20-17 3:35 PM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [magnum][osc] What name to use for magnum
>> commands in osc?
>> 
>> On 03/20/2017 03:08 PM, Adrian Otto wrote:
>>> Team,
>>> 
>>> Stephen Watson has been working on a magnum feature to add magnum
>> commands to the openstack client by implementing a plugin:
>>> 
>>> 
>> https://review.openstack.org/#/q/status:open+project:openstack/python-
>>> magnumclient+osc
>>> 
>>> In review of this work, a question has resurfaced, as to what the
>> client command name should be for magnum related commands. Naturally,
>> we’d like to have the name “cluster” but that word is already in use by
>> Senlin.
>> 
>> Unfortunately, the Senlin API uses a whole bunch of generic terms as
>> top-level REST resources, including "cluster", "event", "action",
>> "profile", "policy", and "node". :( I've warned before that use of
>> these generic terms in OpenStack APIs without a central group
>> responsible for curating the API would lead to problems like this. This
>> is why, IMHO, we need the API working group to be ultimately
>> responsible for preventing this type of thing from happening. Otherwise,
>> there ends up being a whole bunch of duplication and same terms being
>> used for entirely different things.
>> 
>>> Stephen opened a discussion with Dean Troyer about this, and found
>> that “infra” might be a suitable name and began using that, and
>> multiple team members are not satisfied with it.
>> 
>> Yeah, not sure about "infra". That is both too generic and not an
>> actual "thing" that Magnum provides.
>> 
>>> The name “magnum” was excluded from consideration because OSC aims
>> to be project name agnostic. We know that no matter what word we pick,
>> it’s not going to be ideal. I’ve added an agenda on our upcoming team
>> meeting to judge community consensus about which alternative we should
>> select:
>>> 
>>> https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-
>> 03
>>> -21_1600_UTC
>>> 
>>> Current choices on the table are:
>>> 
>>>  * c_cluster (possible abbreviation alias for
>> container_infra_cluster)
>>>  * coe_cluster
>>>  * mcluster
>>>  * infra
>>> 
>>> For example, our selected name would appear in “openstack …” commands.
>> Such as:
>>> 
>>> $ openstack c_cluster create …
>>> 
>>> If you have input to share, I encourage you to reply to this thread,
>> or come to the team meeting so we can consider your input before the
>> team makes a selection.
>> 
>> What is Magnum's service-types-authority service_type?
>> 
>> Best,
>> -jay
>> 
>> ___
>> ___
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-
>> requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
Kevin,

I added that to the list for consideration. Feel free to add others to the list 
on the team agenda using our Wiki page.

Adrian

> On Mar 20, 2017, at 12:27 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
> 
> What about coe?
> 
> Thanks,
> Kevin
> ________
> From: Adrian Otto [adrian.o...@rackspace.com]
> Sent: Monday, March 20, 2017 12:08 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [magnum][osc] What name to use for magnum commands   
>   in osc?
> 
> Team,
> 
> Stephen Watson has been working on a magnum feature to add magnum commands 
> to the openstack client by implementing a plugin:
> 
> https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc
> 
> In review of this work, a question has resurfaced, as to what the client 
> command name should be for magnum related commands. Naturally, we’d like to 
> have the name “cluster” but that word is already in use by Senlin. Stephen 
> opened a discussion with Dean Troyer about this, and found that “infra” might 
> be a suitable name and began using that, but multiple team members are not 
> satisfied with it. The name “magnum” was excluded from consideration because 
> OSC aims to be project name agnostic. We know that no matter what word we 
> pick, it’s not going to be ideal. I’ve added an agenda on our upcoming team 
> meeting to judge community consensus about which alternative we should select:
> 
> https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-03-21_1600_UTC
> 
> Current choices on the table are:
> 
>  * c_cluster (possible abbreviation alias for container_infra_cluster)
>  * coe_cluster
>  * mcluster
>  * infra
> 
> For example, our selected name would appear in “openstack …” commands. Such 
> as:
> 
> $ openstack c_cluster create …
> 
> If you have input to share, I encourage you to reply to this thread, or come 
> to the team meeting so we can consider your input before the team makes a 
> selection.
> 
> Thanks,
> 
> Adrian
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
Team,

Stephen Watson has been working on a magnum feature to add magnum commands to 
the openstack client by implementing a plugin:

https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc

In review of this work, a question has resurfaced, as to what the client 
command name should be for magnum related commands. Naturally, we’d like to 
have the name “cluster” but that word is already in use by Senlin. Stephen 
opened a discussion with Dean Troyer about this, and found that “infra” might 
be a suitable name and began using that, but multiple team members are not 
satisfied with it. The name “magnum” was excluded from consideration because 
OSC aims to be project name agnostic. We know that no matter what word we pick, 
it’s not going to be ideal. I’ve added an agenda on our upcoming team meeting 
to judge community consensus about which alternative we should select:

https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-03-21_1600_UTC

Current choices on the table are:

  * c_cluster (possible abbreviation alias for container_infra_cluster)
  * coe_cluster
  * mcluster
  * infra

For example, our selected name would appear in “openstack …” commands. Such as:

$ openstack c_cluster create …

If you have input to share, I encourage you to reply to this thread, or come to 
the team meeting so we can consider your input before the team makes a 
selection.

Thanks,

Adrian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [containers][magnum] Make certs insecure in magnum drivers

2017-02-10 Thread Adrian Otto
I have opened the following bug ticket for this issue:

https://bugs.launchpad.net/magnum/+bug/1663757

Regards,

Adrian

On Feb 10, 2017, at 1:46 PM, Adrian Otto 
<adrian.o...@rackspace.com> wrote:

What I’d like to see in this case is to use secure connections by default, and 
to offer workarounds, such as support for self-signed certificates, as options 
for those who need them. I would have voted against patch set 383493. It’s also 
not linked to a bug ticket, which we normally require prior to merge. I’ll see 
if I can track down the author to see about fixing this properly, or if there 
is a volunteer to do this better, I’m open to that too.

Adrian

On Feb 10, 2017, at 2:05 AM, Kevin Lefevre 
<lefevre.ke...@gmail.com> wrote:

Hi,

This change (https://review.openstack.org/#/c/383493/) makes certificate 
requests to magnum_api insecure, since that is a common use case.

In the swarm drivers, the make-cert.py script is written in Python, whereas in 
the k8s drivers for CoreOS and Atomic it is a shell script.

I wanted to make the change (https://review.openstack.org/#/c/430755/) but it 
gets flagged by bandit because the Python requests package is used with TLS 
verification disabled.

I know that we should support custom CAs in the future, but if insecure 
requests are the default right now (per the previously merged change), what 
should we do?

Do we disable bandit for the swarm drivers? Or do we use the same scripts 
(and keep them as simple as possible) for all the drivers, possibly without 
Python, as it is not included in CoreOS?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [containers][magnum] Make certs insecure in magnum drivers

2017-02-10 Thread Adrian Otto
What I’d like to see in this case is to use secure connections by default, and 
to offer workarounds, such as support for self-signed certificates, as options 
for those who need them. I would have voted against patch set 383493. It’s also 
not linked to a bug ticket, which we normally require prior to merge. I’ll see 
if I can track down the author to see about fixing this properly, or if there 
is a volunteer to do this better, I’m open to that too.
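
To make that concrete, here is a minimal sketch, not the actual driver code, of
what secure-by-default with an explicit opt-out could look like when talking to
the magnum API with the Python requests package. The function and parameter
names are illustrative.

    # Illustrative only: verify TLS by default, and require an explicit
    # opt-in (or a custom CA bundle) to deviate from that.
    import requests


    def fetch_from_magnum_api(url, ca_cert=None, insecure=False):
        if insecure:
            # Explicitly requested by the operator; this is the pattern
            # bandit rightly flags when it is the unconditional default.
            verify = False
        else:
            # Either a custom CA bundle path, or the system default bundle.
            verify = ca_cert if ca_cert else True
        resp = requests.get(url, verify=verify, timeout=30)
        resp.raise_for_status()
        return resp.content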

Adrian

> On Feb 10, 2017, at 2:05 AM, Kevin Lefevre  wrote:
> 
> Hi,
> 
> This change (https://review.openstack.org/#/c/383493/) makes certificate 
> requests to magnum_api insecure, since that is a common use case.
> 
> In the swarm drivers, the make-cert.py script is written in Python, whereas 
> in the k8s drivers for CoreOS and Atomic it is a shell script.
> 
> I wanted to make the change (https://review.openstack.org/#/c/430755/) but it 
> gets flagged by bandit because the Python requests package is used with TLS 
> verification disabled.
> 
> I know that we should support custom CAs in the future, but if insecure 
> requests are the default right now (per the previously merged change), what 
> should we do?
> 
> Do we disable bandit for the swarm drivers? Or do we use the same scripts 
> (and keep them as simple as possible) for all the drivers, possibly without 
> Python, as it is not included in CoreOS?
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [Horizon][Karbor][Magnum] Requesting FFE for xstatic packages

2017-02-08 Thread Adrian Otto
I voted to merge [1] and [2]:

[1] https://review.openstack.org/#/c/429753/
[2] https://review.openstack.org/#/c/430941/

FFE approved for Magnum, provided this does not cause problems for other 
projects.

Adrian

On Feb 8, 2017, at 7:25 AM, Adrian Otto 
<adrian.o...@rackspace.com> wrote:

We are actively working to verify that magnum-ui works with the adjusted 
requirements.txt, and as soon as we have confirmed this change is 
non-disruptive, I will be ready to approve the FFE.

Adrian

On Feb 7, 2017, at 4:54 PM, Richard Jones 
<r1chardj0...@gmail.com> wrote:

It looks like Magnum-UI only has one xstatic package in their
requirements that isn't already in Horizon's requirements (and
therefore is superfluous), and that's xstatic-magic-search, which has
been replaced in Horizon by pulling magic search into the Horizon tree
(we forked because maintaining our own extensions against the package
was getting out of hand - we'd basically rewritten a large proportion
of the code).

I would recommend that the Magnum-UI project remove all xstatic
packages from requirements.txt


  Richard

On 7 February 2017 at 14:17, Tony Breeds 
<t...@bakeyournoodle.com> wrote:
On Tue, Feb 07, 2017 at 10:39:41AM +1100, Richard Jones wrote:
Hi requirements team,

We've had a downstream packager come to us with issues packaging the
Horizon RC as described in this bug report:

https://bugs.launchpad.net/horizon/+bug/1662180

The issue stems from the requirements file having several xstatic
package minimum versions specified that are no longer compatible with
Horizon, and the RDO build system honors those minimum version
specifications, and boom!

This is a specific case of OpenStack providing poor tools for testing/validating
minimum requirements.  This is a thing we started trying to fix in Ocata but
the work is slow going :(   I'm a little confused how this wasn't caught sooner
by RDO (given they would appear to have been testing the minimums for xstatic-*)

Rob Cresswell has proposed a patch to bump those minimum versions up
to the versions specified in upper-constraints.txt:

https://review.openstack.org/#/c/429753

That review seems to adjust all Xstatic packages where the minimum != the
constrained version which is probably more than is required but it doesn't
actually increase the knock-on effects so it seems like a good idea to me :)
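
To illustrate the gap being described here, with hypothetical version numbers
and the generic packaging library rather than anything OpenStack-specific: the
declared minimum remains "allowed" even though only the constrained pin is ever
exercised in CI, so a distro build that honors the minimum can break without
any gate noticing.

    # Illustration only; version numbers are hypothetical examples.
    from packaging.requirements import Requirement
    from packaging.version import Version

    declared = Requirement("XStatic-Angular>=1.3.7")  # declared minimum
    ci_pin = Version("1.5.8.0")          # upper-constraints pin CI installs
    distro_build = Version("1.3.7")      # what a packager honoring the minimum ships

    print(ci_pin in declared.specifier)        # True -> CI is happy
    print(distro_build in declared.specifier)  # True -> also allowed, but never
                                               # tested, so Horizon may no longer
                                               # work with it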

Looking at the projects that are affected by Rob's review:

Package  : xstatic-angular [xstatic-angular>=1.3.7] (used by 3 projects)
Package  : xstatic-angular-bootstrap [xstatic-angular-bootstrap>=0.11.0.2] 
(used by 3 projects)
Package  : xstatic-angular-gettext [xstatic-angular-gettext>=2.1.0.2] (used 
by 3 projects)
Package  : xstatic-bootstrap-scss [xstatic-bootstrap-scss>=3.1.1.1] (used 
by 3 projects)
Package  : xstatic-d3 [xstatic-d3>=3.1.6.2] (used by 3 projects)
Package  : xstatic-font-awesome [xstatic-font-awesome>=4.3.0] (used by 3 
projects)
Package  : xstatic-jasmine [xstatic-jasmine>=2.1.2.0] (used by 3 projects)
Package  : xstatic-jsencrypt [xstatic-jsencrypt>=2.0.0.2] (used by 3 
projects)
Package  : xstatic-rickshaw [xstatic-rickshaw>=1.5.0] (used by 3 projects)
Package  : xstatic-smart-table [xstatic-smart-table!=1.4.13.0,>=1.4.5.3] 
(used by 3 projects)
Package  : xstatic-term-js [xstatic-term-js>=0.0.4.1] (used by 3 projects)
openstack/horizon [tc:approved-release]
openstack/karbor-dashboard[]
openstack/magnum-ui   []


Package  : xstatic-bootswatch [xstatic-bootswatch>=3.3.5.3] (used by 1 
projects)
openstack/horizon [tc:approved-release]

And obviously RDO

This will mean that Horizon will need an RC2, and any packaging/distro testing
for horizon (and plugins/dashboards) will need to be restarted (iff said
testing was done with an xstatic package not listed in 
upper-constraints.txt[1])

I tried to determine the impact on magnum-ui and karbor-dashboard and AFAICT
they're already using constraints.  The next thing to look at is the release
model which is:
  magnum-ui:
   type: horizon-plugin
   model: cycle-with-intermediary
  karbor-dashboard:
   type:  unknown
   model: unknown

I think this means it's safe to grant this FFE as the affected plugins aren't
necessarily in a stabilisation phase.

So as far as I can see we have 2 options:
  1. Do nothing: there will be other cases that minimums are not functional.
 RDO have tools and data to fix this in their own repos so we're not
 actually blocking them
  2. Take the patch, and accept the knock on effects.

I'm okay with taking this FFE if Karbor and Magnum PTLs sign off here (or on 
the review)

Additionally to the above I will be proposing a patch to Horizon's
documented processes to ensure that when

Re: [openstack-dev] [requirements] [Horizon][Karbor][Magnum] Requesting FFE for xstatic packages

2017-02-08 Thread Adrian Otto
We are actively working to verify that magnum-ui works with the adjusted 
requirements.txt, and as soon as we have confirmed this change is 
non-disruptive, I will be ready to approve the FFE.

Adrian

> On Feb 7, 2017, at 4:54 PM, Richard Jones  wrote:
> 
> It looks like Magnum-UI only has one xstatic package in their
> requirements that isn't already in Horizon's requirements (and
> therefore is superfluous), and that's xstatic-magic-search, which has
> been replaced in Horizon by pulling magic search into the Horizon tree
> (we forked because maintaining our own extensions against the package
> was getting out of hand - we'd basically rewritten a large proportion
> of the code).
> 
> I would recommend that the Magnum-UI project remove all xstatic
> packages from requirements.txt
> 
> 
>Richard
> 
> On 7 February 2017 at 14:17, Tony Breeds  wrote:
>> On Tue, Feb 07, 2017 at 10:39:41AM +1100, Richard Jones wrote:
>>> Hi requirements team,
>>> 
>>> We've had a downstream packager come to us with issues packaging the
>>> Horizon RC as described in this bug report:
>>> 
>>> https://bugs.launchpad.net/horizon/+bug/1662180
>>> 
>>> The issue stems from the requirements file having several xstatic
>>> package minimum versions specified that are no longer compatible with
>>> Horizon, and the RDO build system honors those minimum version
>>> specifications, and boom!
>> 
>> This is a specific case of OpenStack providing poor tools for 
>> testing/validating
>> minimum requirements.  This is a thing we started trying to fix in Ocata but
>> the work is slow going :(   I'm a little confused how this wasn't caught 
>> sooner
>> by RDO (given they would appear to have been testing the minimums for 
>> xstatic-*)
>> 
>>> Rob Cresswell has proposed a patch to bump those minimum versions up
>>> to the versions specified in upper-constraints.txt:
>>> 
>>>  https://review.openstack.org/#/c/429753
>> 
>> That review seems to adjust all Xstatic packages where the minimum != the
>> constrained version which is probably more than is required but it doesn't
>> actually increase the knock-on effects so it seems like a good idea to me :)
>> 
>> Looking at the projects that are affected by Rob's review:
>> 
>> Package  : xstatic-angular [xstatic-angular>=1.3.7] (used by 3 projects)
>> Package  : xstatic-angular-bootstrap 
>> [xstatic-angular-bootstrap>=0.11.0.2] (used by 3 projects)
>> Package  : xstatic-angular-gettext [xstatic-angular-gettext>=2.1.0.2] 
>> (used by 3 projects)
>> Package  : xstatic-bootstrap-scss [xstatic-bootstrap-scss>=3.1.1.1] 
>> (used by 3 projects)
>> Package  : xstatic-d3 [xstatic-d3>=3.1.6.2] (used by 3 projects)
>> Package  : xstatic-font-awesome [xstatic-font-awesome>=4.3.0] (used by 3 
>> projects)
>> Package  : xstatic-jasmine [xstatic-jasmine>=2.1.2.0] (used by 3 
>> projects)
>> Package  : xstatic-jsencrypt [xstatic-jsencrypt>=2.0.0.2] (used by 3 
>> projects)
>> Package  : xstatic-rickshaw [xstatic-rickshaw>=1.5.0] (used by 3 
>> projects)
>> Package  : xstatic-smart-table [xstatic-smart-table!=1.4.13.0,>=1.4.5.3] 
>> (used by 3 projects)
>> Package  : xstatic-term-js [xstatic-term-js>=0.0.4.1] (used by 3 
>> projects)
>> openstack/horizon [tc:approved-release]
>> openstack/karbor-dashboard[]
>> openstack/magnum-ui   []
>> 
>> 
>> Package  : xstatic-bootswatch [xstatic-bootswatch>=3.3.5.3] (used by 1 
>> projects)
>> openstack/horizon [tc:approved-release]
>> 
>> And obviously RDO
>> 
>> This will mean that Horizon will need an RC2, and any packaging/distro 
>> testing
>> for horizon (and plugins/dashboards) will need to be restarted (iff said
>> testing was done with an xstatic package not listed in 
>> upper-constraints.txt[1])
>> 
>> I tried to determine the impact on magnum-ui and karbor-dashboard and AFAICT
>> they're already using constraints.  The next thing to look at is the release
>> model which is:
>>magnum-ui:
>> type: horizon-plugin
>> model: cycle-with-intermediary
>>karbor-dashboard:
>> type:  unknown
>> model: unknown
>> 
>> I think this means it's safe to grant this FFE as the affected plugins aren't
>> necessarily in a stabilisation phase.
>> 
>> So as far as I can see we have 2 options:
>>1. Do nothing: there will be other cases that minimums are not functional.
>>   RDO have tools and data to fix this in their own repos so we're not
>>   actually blocking them
>>2. Take the patch, and accept the knock on effects.
>> 
>> I'm okay with taking this FFE if Karbor and Magnum PTLs sign off here (or on 
>> the review)
>> 
>>> Additionally to the above I will be proposing a patch to Horizon's
>>> documented processes to ensure that when an xstatic upper-constraints
>>> version is bumped we also bump the minimum version in
>>> global-requirements to 

[openstack-dev] [magnum] PTL Candidacy for Pike

2017-01-28 Thread Adrian Otto
Team,

I announce my candidacy for, and respectfully request your support to serve as 
your Magnum PTL again for the Pike release cycle.

Here are my achievements and OpenStack experience that make me the best choice 
for this role:

* Founder of the OpenStack Containers Team
* Established vision and specification for Magnum
* Founding PTL for Magnum
* Core reviewer since the first line of code was contributed in Nov 2014
* Added Magnum to OpenStack
* Led numerous mid cycle meetups as PTL
* 3 terms of experience as elected PTL for Solum
* Involved with OpenStack since Austin Design Summit in 2010

What background and skills help me to serve in this role well:

* Over 20 years of experience in technical leadership positions
* Unmatched experience leading multi-organization collaborations
* Diplomacy skills for inclusion of numerous viewpoints
* Ability to drive consensus and shared vision
* Considerable experience in public speaking, including multiple keynotes at 
OpenStack Summits, and numerous appearances at other events.
* Leadership of collaborative OpenStack design summit sessions
* Deep belief in Open Source, Open Development, Open Design, and Open Community
* I love OpenStack and I love using containers

During the Ocata cycle we worked toward enabling the next generation
of applications with a design for complex clusters, and better integrating
our clusters with OpenStack services for networking and storage.

I am only running for PTL for one project, because I want Magnum to be my
primary focus.  I look forward to your vote, and continued success together.

Thanks,

Adrian Otto



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Feature freeze coming today

2017-01-23 Thread Adrian Otto
Team,

I will be starting our feature freeze today. We have a few more patches to 
consider for merge before we enter the freeze. I’ll let you all know when each 
has been considered, and we are ready to begin the freeze.

Thanks,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [containers][magnum] Magnum team at Summit?

2017-01-18 Thread Adrian Otto

On Jan 18, 2017, at 10:48 AM, Mark Baker 
<mark.ba...@canonical.com> wrote:

Hi Adrian,

Let me know if you have similar questions or concerns about Ubuntu Core with 
Magnum.

Mark

Thanks, Mark! Is there any chance you or an Ubuntu Core representative could 
join us for a discussion at the PTG, and/or an upcoming IRC team meeting? 
Supported operating system images for our cluster drivers are a current topic 
of team conversation, and it would be helpful to have clarity on what 
(support/dev/test) resources upstream Linux packagers may be able to offer to 
help guide our conversation.

To give you a sense, we have a Suse-specific k8s driver that has been 
maturing during the Ocata release cycle, our Mesos driver uses Ubuntu Server, 
our Swarm and k8s drivers use Fedora Atomic, and another newer k8s driver uses 
Fedora. The topic of Operating System (OS) support for cluster nodes (versus 
what OS containers are based on) is confusing for many cloud operators, so it 
would be helpful if we worked on clarifying the options and involved 
stakeholders from various OS distributions, so that suitable options are 
available for those who prefer to form Magnum clusters from OS images composed 
from one particular OS or another.

Ideally we could have this discussion at the PTG in Atlanta with participants 
like our core reviewers, Josh Berkus, you, our Suse contributors, and any other 
representatives from OS distribution organizations who may have an interest in 
cluster drivers for their respective OS types. If that discussion proves 
productive, we could also engage our wider contributor base in a followup IRC 
team meeting with a dedicated agenda item to cover what’s possible, and 
summarize what various stakeholders provided to us as input at the PTG. This 
might give us a chance to source further input from a wider audience than our 
PTG attendees.

Thoughts?

Thanks,

Adrian


On 18 Jan 2017 8:36 p.m., "Adrian Otto" 
<adrian.o...@rackspace.com> wrote:
Josh,

> On Jan 18, 2017, at 10:18 AM, Josh Berkus 
> <jber...@redhat.com> wrote:
>
> Magnum Devs:
>
> Is there going to be a magnum team meeting around OpenStack Summit in
> Boston?
>
> I'm the community manager for Atomic Host, so if you're going to have
> Magnum meetings, I'd like to send you some Atomic engineers to field any
> questions/issues at the Summit.

Thanks for your question. We are planning to have our team design meetings at 
the upcoming PTG event in Atlanta. We are not currently planning to have any 
such meetings in Boston. With that said, we would very much like to involve you 
in an important Atomic related design decision that has recently surfaced, and 
would like to welcome you to an upcoming Magnum IRC team meeting to meet you 
and explain our interests and concerns. I do expect to attend the Boston summit 
myself, so I’m willing to meet you and your engineers on behalf of our team if 
you are unable to attend the PTG. I’ll reach out to you individually by email 
to explore our options for an Atomic Host meeting agenda item in the mean time.

Regards,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [containers][magnum] Magnum team at Summit?

2017-01-18 Thread Adrian Otto
Josh,

> On Jan 18, 2017, at 10:18 AM, Josh Berkus  wrote:
> 
> Magnum Devs:
> 
> Is there going to be a magnum team meeting around OpenStack Summit in
> Boston?
> 
> I'm the community manager for Atomic Host, so if you're going to have
> Magnum meetings, I'd like to send you some Atomic engineers to field any
> questions/issues at the Summit.

Thanks for your question. We are planning to have our team design meetings at 
the upcoming PTG event in Atlanta. We are not currently planning to have any 
such meetings in Boston. With that said, we would very much like to involve you 
in an important Atomic related design decision that has recently surfaced, and 
would like to welcome you to an upcoming Magnum IRC team meeting to meet you 
and explain our interests and concerns. I do expect to attend the Boston summit 
myself, so I’m willing to meet you and your engineers on behalf of our team if 
you are unable to attend the PTG. I’ll reach out to you individually by email 
to explore our options for an Atomic Host meeting agenda item in the mean time.

Regards,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Adrian Otto

> On Jan 16, 2017, at 11:02 AM, Dave McCowan (dmccowan)  
> wrote:
> 
> On 1/16/17, 11:52 AM, "Ian Cordasco"  wrote:
> 
>> -Original Message-
>> From: Rob C 
>> Reply: OpenStack Development Mailing List (not for usage questions)
>> 
>> Date: January 16, 2017 at 10:33:20
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject:  Re: [openstack-dev] [all] [barbican] [security] Why are
>> projects trying to avoid Barbican, still?
>> 
>>> Thanks for raising this on the mailing list Ian, I too share some of
>>> your consternation regarding this issue.
>>> 
>>> I think the main point has already been hit on, developers don't want to
>>> require that Barbican be deployed in order for their service to be
>>> used.
>>> 
>>> The resulting spread of badly audited secret management code is pretty
>>> ugly and makes certifying OpenStack for some types of operation very
>>> difficult; simply listing where key management "happens" and what
>>> protocols are in use quickly becomes a non-trivial operation with some
>>> teams using hard coded values while others use configurable algorithms
>>> and no connection between any of them.
>>> 
>>> In some ways I think that the castellan project was supposed to help
>>> address the issue. The castellan documentation[1] is a little sparse but
>>> my understanding is that it exists as an abstraction layer for
>>> key management, such that a service can just be set to use castellan,
>>> which in turn can be told to use either a local key-manager, provided by
>>> the project or Barbican when it is available.
>>> 
>>> Perhaps a misstep previously was that Barbican made no efforts to
>>> really provide a robust non-HSM mode of operation. An obvious contrast
>>> here is with Hashicorp Vault [2], which has garnered significant market
>>> share in key management because its software-only* mode of operation is
>>> well documented, robust, and cryptographically sound. I think that the
>>> lack of a sane non-HSM mode has resulted in developers trying to create
>>> their own and contributed to the situation.

Bingo!

>>> I'd be interested to know if development teams would be less concerned
>>> about requiring Barbican deployments, if it had a robust non-HSM
>>> (i.e software only) mode of operation. Lowering the cost of deployment
>>> for organisations that want sensible key management without the expense
>>> of deploying multi-site HSMs.
>>> 
>>> * Vault supports HSM deployments also
>>> 
>>> [1] http://docs.openstack.org/developer/castellan/
>>> [2] https://www.vaultproject.io/
>> 
>> The last I checked, Rob, they also support DogTag IPA which is purely
>> a Software based HSM. Hopefully the Barbican team can confirm this.
>> --
>> Ian Cordasco
> 
> Yep.  Barbican supports four backend secret stores. [1]
> 
> The first (Simple Crypto) is easy to deploy, but not extraordinarily
> secure, since the secrets are encrypted using a static key defined in the
> barbican.conf file.
> 
> The second and third (PKCS#11 and KMIP) are secure, but require an HSM as
> a hardware base to encrypt and/or store the secrets.
> The fourth (Dogtag) is secure, but requires a deployment of Dogtag to
> encrypt and store the secrets.
> 
> We do not currently have a secret store that is both highly secure and
> easy to deploy/manage.
> 
> We, the Barbican community, are very open to any ideas, blueprints, or
> patches on how to achieve this.
> In any of the homegrown per-project secret stores, has a solution been
> developed that solves both of these?
> 
> 
> [1] 
> http://docs.openstack.org/project-install-guide/key-manager/draft/barbican-
> backend.html

The above list of four backend secret stores, each with serious drawbacks, is 
the reason why Barbican has not been widely adopted. Other projects are 
reluctant to depend on Barbican because it’s not present in most clouds. 
Magnum, for example, believed that using Barbican for certificate storage was 
the correct design, and we implemented our solution such that it required 
Barbican. We quickly discovered that it was hurting Magnum’s adoption by 
multiple cloud operators that were reluctant to add the Barbican service in 
order to add Magnum. So, we built internal certificate storage to decouple 
Magnum from Barbican. It’s even less secure than using Barbican with Simple 
Crypto, but it solved our adoption problem. Furthermore, that’s how most clouds 
are using Magnum, because they still don’t run Barbican.
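
To make the Simple Crypto trade-off concrete, here is a small illustration, not 
Barbican's code, of what encrypting secrets with a single static key from a 
config file amounts to: anyone who can read that one key can decrypt every 
stored secret.

    # Illustration only (not Barbican's implementation): static-key
    # encryption in the style of the Simple Crypto backend.
    from cryptography.fernet import Fernet

    # In a Simple Crypto style deployment, this key lives in a config file
    # on the API node, so its secrecy is only as good as file permissions
    # and host access controls.
    static_kek = Fernet.generate_key()

    vault = Fernet(static_kek)
    stored = vault.encrypt(b"k8s CA private key material")

    # Anyone who can read the config file can do this:
    print(vault.decrypt(stored))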

Bottom line: As long as cloud operators have any reluctance to adopt Barbican, 
other community projects will avoid depending on it, even when it’s the right 
technical solution.

Regards,

Adrian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [magnum] Managing cluster drivers as individual distro packages

2017-01-03 Thread Adrian Otto
Team,

We discussed this in today’s team meeting:

http://eavesdrop.openstack.org/meetings/containers/2017/containers.2017-01-03-16.00.html

Our consensus was to start iterating on this in-tree and break it out later 
into a separate repo once we have reasonably mature drivers and/or further 
guidance from the TC about handling drivers.
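
For context on what managing drivers through entry points means in practice, 
here is a hedged sketch of how entry-point-based drivers are typically 
discovered with stevedore. The namespace and driver names below are 
illustrative placeholders, not necessarily the exact strings magnum uses.

    # Sketch of entry-point based driver discovery with stevedore.
    # Namespace and driver names are illustrative placeholders.
    from stevedore import extension

    DISABLED = {'mesos_ubuntu_v1'}  # e.g. read from an operator config option

    mgr = extension.ExtensionManager(
        namespace='magnum.drivers',  # assumed entry-point namespace
        invoke_on_load=False,
    )

    enabled_drivers = {
        ext.name: ext.plugin
        for ext in mgr
        if ext.name not in DISABLED
    }
    print(sorted(enabled_drivers))

Removing a driver's entry point from setup.cfg (option 1 in the quoted proposal 
below) simply means it never shows up in this kind of discovery, while an 
operator-facing config option could filter the loaded set as shown.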

Adrian

On Nov 26, 2016, at 11:31 PM, Yatin Karel 
> wrote:

Hi,

As it will be helpful for the adoption of Magnum, it's good to separate drivers 
somehow and make addition/management of new/current cluster drivers easier.

From a developer's point of view (referring just to myself), I think the 
current approach is OK, as we can manage everything in one place; from an 
operator's perspective, I think it should be easier to add/disable drivers.
Keeping the above points in mind, I think for now we should focus more on the 
current contrib drivers development process, as this will shape how other 
drivers are developed/added in magnum later on. Currently we have three contrib 
drivers under development: k8s_opensuse_v1, dcos_centos_v1, and 
dcos_centos_ironic_v1. So we can target at least partially finalizing the 
process for their addition in this cycle.

Should we focus more on adding new contrib drivers now? I think adding new 
cluster drivers should be made easier and more independent, whether through 
documentation or by other means. I believe disabling can just be done by 
updating setup.cfg or magnum.conf. Maybe we can provide some option for 
disabling drivers without manually updating config/setup files.


<< 1. in-tree:  remove the entrypoints from magnum/setup.cfg to not install them
by default. This will require some plumbing to manage them like separate python
packages, but allows magnum's development team to manage the official drivers
inside the service repo.


For this approach, what if we keep adding drivers automatically, as we do right 
now, and update the docs for operators who want to disable some/all 
automatically installed drivers and on how they can add their custom drivers? 
I think Murali was working on this (the process for adding new contrib 
drivers), so he might have some ideas on it.


<< 2. separate repo: This option sounds cleaner, but requires more refactoring 
and
will separate more the drivers from service, having significant impact in the
development process.

Yes, this sounds cleaner but does not seem necessary now.
I agree with Ricardo about not moving to this approach now, as Drago's concern 
is also valid and it would be difficult to handle that along with other 
high-priority tasks in the Ocata cycle. It can be revisited later, when we have 
a defined process for new drivers and more drivers in-tree.


Regards
Yatin Karel

From: Spyros Trigazis [strig...@gmail.com]
Sent: Friday, November 18, 2016 8:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Managing cluster drivers as individual distro 
packages

Hi all,

In magnum, we implement cluster drivers for the different combinations
of COEs (Container Orchestration Engines) and Operating Systems. The
reasoning behind it is to better encapsulate driver-specific logic and to allow
operators to deploy custom drivers with their deployment-specific changes.

For example, operators might want to:
* have only custom drivers and not install the upstream ones at all
* offer users only some of the available drivers
* create different combinations of COE + os_distro
* create new experimental/staging drivers

It would be reasonable to manage magnum's cluster drivers as different
packages, since they are designed to be treated as individual entities. To do
so, we have two options:

1. in-tree:  remove the entrypoints from magnum/setup.cfg to not install them
by default. This will require some plumbing to manage them like separate python
packages, but allows magnum's development team to manage the official drivers
inside the service repo.

2. separate repo: This option sounds cleaner, but requires more refactoring and
will further separate the drivers from the service, having a significant impact on the
development process.

Thoughts?

Spyros
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Magnum_driver

2016-11-14 Thread Adrian Otto
Ruben,

I found the following two reviews:

https://review.openstack.org/397150 Magnum_driver for congress
https://review.openstack.org/397151 Test for magnum_driver

Are these what you are referring to, or is it something else?

Thanks,

Adrian


> On Nov 14, 2016, at 4:13 AM, Ruben  wrote:
> 
> Hi everybody,
> I've added the magnum_driver code, that I'm trying to write for congress, to 
> review.
> I think that I've made some errors.
> 
> I hope in your help.
> Ruben
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] New Core Reviewers

2016-11-13 Thread Adrian Otto
Jaycen and Yatin,

You have each been added as new core reviewers. Congratulations to you both, 
and thanks for stepping up to take on this new role!

Cheers,

Adrian

> On Nov 7, 2016, at 11:06 AM, Adrian Otto <adrian.o...@rackspace.com> wrote:
> 
> Magnum Core Team,
> 
> I propose Jaycen Grant (jvgrant) and Yatin Karel (yatin) as new Magnum Core 
> Reviewers. Please respond with your votes.
> 
> Thanks,
> 
> Adrian Otto


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] New Core Reviewers

2016-11-07 Thread Adrian Otto
Magnum Core Team,

I propose Jaycen Grant (jvgrant) and Yatin Karel (yatin) as new Magnum Core 
Reviewers. Please respond with your votes.

Thanks,

Adrian Otto
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][octavia][tacker][designate][manila][sahara][magnum][infra] servicevm working group meetup

2016-10-25 Thread Adrian Otto
Thanks, Doug, for organizing this. In our session, I mentioned that Magnum is 
working through some of the issues now, and will touch on a portion of these 
concerns in one of our sessions:

Thursday, October 27, 2:40pm-3:20pm
CCIB - P1 - Room 124/125 

https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16953

If you have time to join us, we’d love your input.

Thanks,

Adrian

On Oct 24, 2016, at 8:23 PM, Doug Wiegley 
> wrote:

As part of a requirements mailing list thread [1], the idea of a servicevm 
working group, or a common framework for reference openstack service VMs, came 
up. It's too late to get onto the official schedule, but unofficially, let's 
meet here:

When: Tuesday, 1:30pm-2:10pm
Where: CCIB P1 Room 128

If this is too short notice, then we can retry on Friday.

Thanks,
doug

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-October/105861.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][octavia][tacker][designate][manila][sahara][magnum][infra] servicevm working group meetup

2016-10-25 Thread Adrian Otto
I plan to attend, but may be a few minutes late. Apologies in advance.

Adrian

> On Oct 24, 2016, at 8:23 PM, Doug Wiegley  
> wrote:
> 
> As part of a requirements mailing list thread [1], the idea of a servicevm 
> working group, or a common framework for reference openstack service VMs, 
> came up. It's too late to get onto the official schedule, but unofficially, 
> let's meet here:
> 
> When: Tuesday, 1:30pm-2:10pm
> Where: CCIB P1 Room 128
> 
> If this is too short notice, then we can retry on Friday.
> 
> Thanks,
> doug
> 
> [1] 
> http://lists.openstack.org/pipermail/openstack-dev/2016-October/105861.html
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Magnum Sessions for Barcelona Summit Attendees

2016-10-24 Thread Adrian Otto
Team,

For those of you attending the Barcelona summit this week, please add the 
following sessions to your calendar, in addition to the Containers track:

https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Magnum%3A

If there are any additional topics you would like to cover with the team, 
please add them to our Friday afternoon meetup:

https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/17216

Thanks,

Adrian Otto
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] PTL Candidacy

2016-09-16 Thread Adrian Otto
I announce my candidacy for, and respectfully request your support to serve as 
your Magnum PTL for the Ocata release cycle.

Here are my achievements and OpenStack experience that make me the 
best choice for this role:

* Founder of the OpenStack Containers Team
* Established vision and specification for Magnum
* Founding PTL for Magnum
* Core reviewer since the first line of code was contributed in Nov 2014
* Added Magnum to OpenStack
* Led numerous mid cycle meetups as PTL
* 3 terms of experience as elected PTL for Solum
* Involved with OpenStack since Austin Design Summit in 2010

What background and skills help me to serve in this role well:

* Over 20 years of experience in technical leadership positions
* Unmatched experience leading multi-organization collaborations
* Diplomacy skills for inclusion of numerous viewpoints
* Ability to drive consensus and shared vision
* Considerable experience in public speaking, including multiple keynotes at 
OpenStack Summits, and numerous appearances at other events.
* Leadership of collaborative OpenStack design summit sessions
* Deep belief in Open Source, Open Development, Open Design, and Open Community
* I love OpenStack and I love using containers

During the Newton cycle I pushed for important changes we made as a team:
* Advocated for a more focused mission for Magnum, resulting in project Zun.
* Advocated for renaming our Bay resource to Cluster.

During the Ocata cycle we will work together on enabling the next generation
of applications with a design for complex clusters, and better integrating
our clusters with OpenStack services for networking and storage.

I am only running for PTL for one project, because I want Magnum to be my
primary focus.  I look forward to your vote, and continued success together.

Thanks,

Adrian Otto
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread Adrian Otto
I am struggling to understand why we would want to remove projects from our big 
tent at all, as long as they are being actively developed under the principles 
of "four opens". It seems to me that working to disqualify such projects sends 
an alarming signal to our ecosystem. The reason we made the big tent to begin 
with was to set a tone of inclusion. This whole discussion seems like a step 
backward. What problem are we trying to solve, exactly?

If we want to have tags to signal team diversity, that's fine. We do that now. 
But setting arbitrary requirements for big tent inclusion based on who 
participates definitely sounds like a mistake.

--
Adrian

> On Aug 1, 2016, at 5:11 AM, Sean Dague  wrote:
> 
>> On 07/31/2016 02:29 PM, Doug Hellmann wrote:
>> Excerpts from Steven Dake (stdake)'s message of 2016-07-31 18:17:28 +:
>>> Kevin,
>>> 
>>> Just assessing your numbers, the team:diverse-affiliation tag covers what
>>> is required to maintain that tag.  It covers more than core reviewers -
>>> also covers commits and reviews.
>>> 
>>> See:
>>> https://github.com/openstack/governance/blob/master/reference/tags/team_diverse-affiliation.rst
>>> 
>>> 
>>> I can tell you from founding 3 projects with the team:diverse-affiliation
>>> tag (Heat, Magnum, Kolla) that team:diverse-affiliation is a very high bar to
>>> meet.  I don't think it's wise to have such strict requirements on single
>>> vendor projects as those objectively defined in team:diverse-affiliation.
>>> 
>>> But Doug's suggestion of timelines could make sense if the timelines gave
>>> plenty of time to meet whatever requirements make sense and the
>>> requirements led to some increase in diverse affiliation.
>> 
>> To be clear, I'm suggesting that projects with team:single-vendor be
>> given enough time to lose that tag. That does not require them to grow
>> diverse enough to get team:diverse-affiliation.
> 
> The idea of 3 cycles to lose the single-vendor tag sounds very
> reasonable to me. This also is very much along the spirit of the tag in
> that it should be one of the top priorities of the team to work on this.
> I'd be in favor.
> 
>-Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [container] [docker] [magnum] [zun] nova-docker alternatives ?

2016-07-29 Thread Adrian Otto
s/mentally/centrally/

Autocorrect is not my friend.

On Jul 29, 2016, at 11:26 AM, Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:

Yasmin,

One option you have is to use the libvirt-lxc nova virt driver, and use an 
image that has a docker daemon installed on it. That would give you a way to 
place docker containers on a data plane that uses no virtualization, but you 
need to individually manage each instance. Another option is to add Magnum to 
your cloud (with or without a libvirt-lxc nova virt driver) and use Magnum to 
mentally manage each container cluster. We refer to such clusters as bays.

Adrian

On Jul 29, 2016, at 11:01 AM, Yasemin DEMİRAL (BİLGEM BTE) 
<yasemin.demi...@tubitak.gov.tr<mailto:yasemin.demi...@tubitak.gov.tr>> wrote:


nova-docker is a dead project, as I learned on the IRC channel.
I need the hypervisor for nova, and I can't install nova-docker on physical 
OpenStack systems. In devstack, I could deploy nova-docker.
What can I do? Is the openstack-magnum or openstack-zun project useful for me? I 
don't know.
Do you have any ideas?

Yasemin Demiral
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [container] [docker] [magnum] [zun] nova-docker alternatives ?

2016-07-29 Thread Adrian Otto
Yasmin,

One option you have is to use the libvirt-lxc nova virt driver, and use an 
image that has a docker daemon installed on it. That would give you a way to 
place docker containers on a data plane that uses no virtualization, but you 
need to individually manage each instance. Another option is to add Magnum to 
your cloud (with or without a libvirt-lxc nova virt driver) and use Magnum to 
mentally manage each container cluster. We refer to such clusters as bays.

Adrian

On Jul 29, 2016, at 11:01 AM, Yasemin DEMİRAL (BİLGEM BTE) 
> wrote:


nova-docker is a dead project, as I learned on the IRC channel.
I need the hypervisor for nova, and I can't install nova-docker on physical 
OpenStack systems. In devstack, I could deploy nova-docker.
What can I do? Is the openstack-magnum or openstack-zun project useful for me? I 
don't know.
Do you have any ideas?

Yasemin Demiral
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Support for bay rollback may break magnum API backward compatibility

2016-07-27 Thread Adrian Otto

On Jul 27, 2016, at 1:26 PM, Hongbin Lu 
> wrote:

Here is the guideline to evaluate an API change: 
http://specs.openstack.org/openstack/api-wg/guidelines/evaluating_api_changes.html
. In particular, I highlight the following:

"""
The following types of changes are acceptable when conditionally added as a new 
API extension:
* Adding an optional property to a resource representation which may be 
supplied by clients, assuming the API previously would ignore this property.
* …
The following types of changes are generally not considered acceptable:
* A change such that a request which was successful before now results in an 
error response
* Changing the semantics of a property in a resource representation which may 
be supplied by clients.
* …
"""

Above all, as Ton mentioned, just adding a new option (--rollback) looks OK. 
However, the implementation should not break the existing behaviors. In 
particular, the proposed patch 
(https://review.openstack.org/#/c/343478/4/magnum/api/controllers/v1/bay.py) 
changes the request parameters and their types, which is considered to be 
unacceptable (unless bumping the microversion). To deal with that, I think 
there are two options:
1. Modify the proposed patch to make it backward-compatible. In particular, it 
should keep the existing properties as is (don’t change their types and 
semantics). The new option should be optional and it should be ignored if 
clients are sending the old requests.

Use the #1 approach above, please.

2. Keep the proposed patch as is, but bump the microversion. You need to 
wait for this patch [1] to merge, and reference the microversion guide [1] to 
bump the version. In addition, it is highly recommended to follow the standard 
deprecation policy [2]. That means i) print a deprecation warning if old APIs 
are used, ii) document how to migrate from old APIs to new APIs, and iii) 
remove the old APIs after the deprecation period.

You can do this as well, but please don’t consider this an OR choice.
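
To make option #1 concrete, here is a generic, minimal sketch (not Magnum's actual 
API controller code) of accepting an optional rollback flag while existing request 
properties keep their types and semantics, so requests from old clients behave 
exactly as before:

# Generic sketch only -- not Magnum's actual API controller.
def parse_update_request(params):
    """Return (rollback, remaining_params) for an update request.

    Old clients never send 'rollback', so they get rollback=False and the
    same behavior as before the change.
    """
    rollback = str(params.pop('rollback', 'false')).lower() in ('1', 'true')
    return rollback, params

# Example (illustrative parameter names):
#   parse_update_request({'node_count': 5})
#       -> (False, {'node_count': 5})
#   parse_update_request({'node_count': 5, 'rollback': 'true'})
#       -> (True, {'node_count': 5})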

Adrian


[1] 
https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html
[2] 
https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: July-27-16 9:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Support for bay rollback may break magnum 
API backward compatibility


Hi Wenzhi,
Looks like you are adding the new --rollback option to bay-update. If the user 
does not specify this new option,
then bay-update behaves the same as before; in other words, if it fails, then 
the state of the bay will be left
in the partially updated mode. Is this correct? If so, this does change the 
API, but does not seem to break
backward compatibility.
Ton Ngo,

"Wenzhi Yu (yuywz)" ---07/27/2016 04:13:07 AM---Hi folks, I am 
working on a patch [1] to add bay rollback mechanism on update failure. But it 
seems

From: "Wenzhi Yu (yuywz)" >
To: "openstack-dev" 
>
Date: 07/27/2016 04:13 AM
Subject: [openstack-dev] [magnum] Support for bay rollback may break magnum API 
backward compatibility





Hi folks,

I am working on a patch [1] to add a bay rollback mechanism on update failure. 
But it seems to break magnum API
backward compatibility.

I'm not sure how to deal with this, can you please give me your suggestion? 
Thanks!

[1]https://review.openstack.org/#/c/343478/

2016-07-27


Best Regards,
Wenzhi Yu 
(yuywz)__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Select our project mascot/logo

2016-07-25 Thread Adrian Otto
How about a shark? Something along these lines:

http://www.logoground.com/logo.php?id=10554



On Jul 25, 2016, at 3:54 PM, Hongbin Lu  wrote:

Hi team,

OpenStack wants to promote individual projects by choosing a mascot to represent 
the project. The idea is to create a family of logos for OpenStack projects 
that are unique, yet immediately identifiable as part of OpenStack. OpenStack 
will be using these logos to promote each project on the OpenStack website, at 
the Summit and in marketing materials.

We can select our own mascot, and then OpenStack will have an illustrator 
create the logo for us. The mascot can be anything from the natural world—an 
animal, fish, plant, or natural feature such as a mountain or waterfall. We 
need to select our top mascot candidates by the first deadline (July 27, this 
Wednesday). There’s more info on the website: 
http://www.openstack.org/project-mascots

Action Item: Everyone please let me know what is your favorite mascot. You can 
either reply to this ML or discuss it in the next team meeting.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposing Spyros Trigazis for Magnum core reviewer team

2016-07-24 Thread Adrian Otto
+1

--
Adrian

On Jul 22, 2016, at 10:28 AM, Hongbin Lu 
> wrote:

Hi all,

Spyros has consistently contributed to Magnum for a while. In my opinion, what 
differentiates him from others is the significance of his contribution, which 
adds concrete value to the project. For example, the operator-oriented install 
guide he delivered attracts a significant number of users to install Magnum, 
which facilitates the adoption of the project. I would like to emphasize that 
the Magnum team has been working hard but struggling to increase the adoption, 
and Spyros’s contribution means a lot in this regard. He also completed 
several essential and challenging tasks, such as adding support for OverlayFS, 
adding a Rally job for Magnum, etc. Overall, I am impressed by the amount of 
high-quality patches he submitted. He is also helpful in code reviews, and his 
comments often help us identify pitfalls that are not easy to identify. He is 
also very active in IRC and ML. Based on his contribution and expertise, I 
think he is qualified to be a Magnum core reviewer.

I am happy to propose Spyros to be a core reviewer of Magnum team. According to 
the OpenStack Governance process [1], we require a minimum of 4 +1 votes from 
Magnum core reviewers within a 1 week voting window (consider this proposal as 
a +1 vote from me). A vote of -1 is a veto. If we cannot get enough votes or 
there is a veto vote prior to the end of the voting window, Spyros is not able 
to join the core team and needs to wait 30 days to reapply.

The voting is open until Thursday, July 29th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-09 Thread Adrian Otto
Rackspace is willing to host in Austin, TX or San Antonio, TX, or San 
Francisco, CA.

--
Adrian

On Jun 7, 2016, at 1:35 PM, Hongbin Lu 
> wrote:

Hi all,

Please find the Doodle poll below for selecting the Magnum midcycle date. 
Presumably, it will be a 2-day event. The location is undecided for now. The 
previous midcycles were hosted in the Bay Area, so I guess we will stay there 
this time.

http://doodle.com/poll/5tbcyc37yb7ckiec

In addition, the Magnum team is looking for a host for the midcycle. Please let us 
know if you are interested in hosting us.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-02 Thread Adrian Otto
I am really struggling to accept the idea of heterogeneous clusters. My 
experience causes me to question whether a heterogeneous cluster makes sense for 
Magnum. I will try to explain why I have this hesitation:

1) If you have a heterogeneous cluster, it suggests that you are using external 
intelligence to manage the cluster, rather than relying on it to be 
self-managing. This is an anti-pattern that I refer to as “pets" rather than 
“cattle”. The anti-pattern results in brittle deployments that rely on external 
intelligence to manage (upgrade, diagnose, and repair) the cluster. The 
automation of the management is much harder when a cluster is heterogeneous.

2) If you have a heterogeneous cluster, it can fall out of balance. This means 
that if one of your “important” or “large” members fails, there may not be 
adequate remaining members in the cluster to continue operating properly in the 
degraded state. The logic of how to track and deal with this needs to be 
handled. It’s much simpler in the homogeneous case.

3) Heterogeneous clusters are complex compared to homogeneous clusters. They 
are harder to work with, and that usually means that unplanned outages are more 
frequent, and last longer than they would with a homogeneous cluster.

Summary:

Heterogeneous:
  - Complex
  - Prone to imbalance upon node failure
  - Less reliable

Homogeneous:
  - Simple
  - Don’t get imbalanced when a min_members concept is supported by the cluster 
controller
  - More reliable

My bias is to assert that applications that want a heterogeneous mix of system 
capacities at a node level should be deployed on multiple homogeneous bays, not 
a single heterogeneous one. That way you end up with a composition of simple 
systems rather than a larger complex one.
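
As a concrete illustration (the bay and baymodel names are made up, and the flags 
follow the magnum bay-create CLI of this time frame), an application that wants two 
node sizes can be composed from two homogeneous bays instead of one heterogeneous bay:

magnum bay-create --name app-small --baymodel small-node-model --node-count 5
magnum bay-create --name app-large --baymodel large-node-model --node-count 2

Each bay stays internally uniform, and the application spans both.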

Adrian


> On Jun 1, 2016, at 3:02 PM, Hongbin Lu  wrote:
> 
> Personally, I think this is a good idea, since it can address a set of 
> similar use cases like below:
> * I want to deploy a k8s cluster to 2 availability zone (in future 2 
> regions/clouds).
> * I want to spin up N nodes in AZ1, M nodes in AZ2.
> * I want to scale the number of nodes in specific AZ/region/cloud. For 
> example, add/remove K nodes from AZ1 (with AZ2 untouched).
> 
> The use case above should be very common and universal everywhere. To address 
> the use case, Magnum needs to support provisioning heterogeneous set of nodes 
> at deploy time and managing them at runtime. It looks the proposed idea 
> (manually managing individual nodes or individual group of nodes) can address 
> this requirement very well. Besides the proposed idea, I cannot think of an 
> alternative solution.
> 
> Therefore, I vote to support the proposed idea.
> 
> Best regards,
> Hongbin
> 
>> -Original Message-
>> From: Hongbin Lu
>> Sent: June-01-16 11:44 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>> 
>> Hi team,
>> 
>> A blueprint was created for tracking this idea:
>> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
>> nodes . I won't approve the BP until there is a team decision on
>> accepting/rejecting the idea.
>> 
>> From the discussion in design summit, it looks everyone is OK with the
>> idea in general (with some disagreements in the API style). However,
>> from the last team meeting, it looks some people disagree with the idea
>> fundamentally. so I re-raised this ML to re-discuss.
>> 
>> If you agree or disagree with the idea of manually managing the Heat
>> stacks (that contains individual bay nodes), please write down your
>> arguments here. Then, we can start debating on that.
>> 
>> Best regards,
>> Hongbin
>> 
>>> -Original Message-
>>> From: Cammann, Tom [mailto:tom.camm...@hpe.com]
>>> Sent: May-16-16 5:28 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>>> managing the bay nodes
>>> 
>>> The discussion at the summit was very positive around this
>>> requirement
>>> but as this change will make a large impact to Magnum it will need a
>>> spec.
>>> 
>>> On the API of things, I was thinking a slightly more generic approach
>>> to incorporate other lifecycle operations into the same API.
>>> Eg:
>>> magnum bay-manage  
>>> 
>>> magnum bay-manage  reset --hard
>>> magnum bay-manage  rebuild
>>> magnum bay-manage  node-delete 
>>> magnum bay-manage  node-add --flavor 
>>> magnum bay-manage  node-reset 
>>> magnum bay-manage  node-list
>>> 
>>> Tom
>>> 
>>> From: Yuanying OTSUKA 
>>> Reply-To: "OpenStack Development Mailing List (not for usage
>>> questions)" 
>>> Date: Monday, 16 May 2016 at 01:07
>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>> 
>>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of 

Re: [openstack-dev] [magnum][lbaas] Operator-facing installation guide

2016-06-02 Thread Adrian Otto
Brandon,

Magnum uses neutron’s LBaaS service to allow for multi-master bays. We can 
balance connections between multiple kubernetes masters, for example. It’s not 
needed for single master bays, which are much more common. We have a blueprint 
that is in design stage for de-coupling magnum from neutron LBaaS for use cases 
that don’t require it:

https://blueprints.launchpad.net/magnum/+spec/decouple-lbaas

Adrian

> On Jun 2, 2016, at 2:48 PM, Brandon Logan  wrote:
> 
> Call me ignorant, but I'm surprised at neutron-lbaas being a dependency
> of magnum.  Why is this?  Sorry if it has been asked before and I've
> just missed that answer?
> 
> Thanks,
> Brandon
> On Wed, 2016-06-01 at 14:39 +, Hongbin Lu wrote:
>> Hi lbaas team,
>> 
>> 
>> 
>> I wonder if there is an operator-facing installation guide for
>> neutron-lbaas. I asked that because Magnum is working on an
>> installation guide [1] and neutron-lbaas is a dependency of Magnum. We
>> want to link to an official lbaas guide so that our users will have a
>> complete set of instructions. Any pointer?
>> 
>> 
>> 
>> [1] https://review.openstack.org/#/c/319399/
>> 
>> 
>> 
>> Best regards,
>> 
>> Hongbin
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-25 Thread Adrian Otto

> On May 25, 2016, at 12:43 PM, Ben Swartzlander  wrote:
> 
> On 05/25/2016 06:48 AM, Sean Dague wrote:
>> I've been watching the threads, trying to digest, and find the ways
>> this is getting sliced don't quite slice the way I've been thinking
>> about it (which might just mean I've been thinking about it wrong).
>> However, here is my current set of thoughts on things.
>> 
>> 1. Should OpenStack be open to more languages?
>> 
>> I've long thought the answer should be yes. Especially if it means we
>> end up with keystonemiddleware, keystoneauth, oslo.config in other
>> languages that let us share elements of infrastructure pretty
>> seamlessly. The OpenStack model of building services that register in a
>> service catalog and use common tokens for permissions through a bunch of
>> services is quite valuable. There are definitely people that have Java
>> applications that fit into the OpenStack model, but have no place to
>> collaborate on them.
>> 
>> (Note: nothing about the current proposal goes anywhere near this)
>> 
>> 2. Is Go a "good" language to add to the community?
>> 
>> Here I am far more mixed. In programming language time, Go is super new.
>> It is roughly the same age as the OpenStack project. The idea that Go and
>> Python programmers overlap seems to be because some shops that used
>> to do a lot in Python, now do some things in Go.
>> 
>> But when compared to other languages in our bag, Javascript, Bash. These
>> are things that go back 2 decades. Unless you have avoided Linux or the
>> Web successfully for 2 decades, you've done these in some form. Maybe
>> not being an expert, but there is vestigial bits of knowledge there. So
>> they *are* different. In the same way that C or Java are different, for
>> having age. The likelihood of finding community members than know Python
>> + one of these is actually *way* higher than Python + Go, just based on
>> duration of existence. In a decade that probably won't be true.
> 
> Thank you for bringing up this point. My major concern boils down to the 
> likelihood that Go will never be well understood by more than a small subset 
> of the community. (When I say "well understood" I mean years of experiences 
> with thousands of lines of code -- not "I can write hello world").
> 
> You expect this problem to get better in the future -- I expect this problem 
> to get worse. Not all programming languages survive. Google for "dead 
> programming languages" some time and you'll find many examples. The problem 
> is that it's never obvious when the languages are young that something more 
> popular will come along and kill a language.
> 
> I don't want to imply that Golang is especially likely to die any time soon. 
> But every time you add a new language to a community, you increase the *risk* 
> that one of the programming languages used by the community will eventually 
> fall out of popularity, and it will become hard or impossible to find people 
> to maintain parts of the code.
> 
> I tend to take a long view of software lifecycles, having witnessed the death 
> of projects due to bad decisions before. Does anyone expect OpenStack to 
> still be around in 10 years? 20 years? What is the likelihood that both 
> Python and Golang are both still popular languages then? I guarantee [1] that 
> it's lower than the likelihood that only Python is still a popular language.
> 
> Adding a new language adds risk that new contributors won't understand some 
> parts of the code. Period. It doesn't matter what the language is.
> 
> My proposed solution is to draw the community line at the language barrier 
> line. People in this community are expected to understand Python. Anyone can 
> start other communities, and they can overlap with ours, but let's make it 
> clear that they're not the same.

Take all the names of the programming languages out for a moment here. The 
point is not that one is any more appropriate than another. In order to evolve, 
OpenStack must allow alternatives. It sets us up for long term success. 
Evolution is gradual change. Will we ever need to refactor things from one 
language to another, or have the same API implemented in two languages? Sure. 
That’s fine. Optimize for a long term outcome, not short term efficiencies. 
Twenty years from now if OpenStack still has a  “Python only” attitude, I’m 
sure it will be totally and utterly irrelevant. We will have all moved on by 
then. Let’s get this right, and offer individual projects freedom to do what 
they feel is best. Have a selection of designated languages, and rationale for 
why to stick with whichever one is preferred at a point in time.

Adrian

> 
> -Ben Swartzlander
> 
> [1] For all X, Y in (0, 1): X * Y < X
> 
>> 3. Are there performance problems where python really can't get there?
>> 
>> This seems like a pretty clear "yes". It shouldn't be surprising. Python
>> has no jit (yes there is pypy, but it's compat story isn't here). There
>> is 

Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-24 Thread Adrian Otto

> On May 24, 2016, at 12:09 PM, Mike Perez  wrote:
> 
> On 12:24 May 24, Thierry Carrez wrote:
>> Morgan Fainberg wrote:
>>> [...]  If we are accepting golang, I want it to be clearly
>>> documented that the expectation is it is used exclusively where there is
>>> a demonstrable case (such as with swift) and not a carte blanche to use
>>> it wherever-you-please.
>>> 
>>> I want this to be a social contract looked at and enforced by the
>>> community, not special permissions that are granted by the TC (I don't
>>> want the TC to need to step in an approve every single use case of
>>> golang, or javascript ...). It's bottlenecking back to the TC for
>>> special permissions or inclusion (see reasons for the dissolution of the
>>> "integrated release").
>>> 
>>> This isn't strictly an all or nothing case, this is a "how would we
>>> enforce this?" type deal. Lean on infra to enforce that only projects
>>> with the golang-is-ok-here tag are allowed to use it? I don't want
>>> people to write their APIs in javascript (and node.js) nor in golang. I
>>> would like to see most of the work continue with python as the primary
>>> language. I just think it's unreasonable to lock tools behind a gate
>>> that is stronger than the social / community contract (and outlined in
>>> the resolution including X language).
>> 
>> +1
>> 
>> I'd prefer if we didn't have to special-case anyone, and we could come up
>> with general rules that every OpenStack project follows. Any other solution
>> is an administrative nightmare and a source of tension between projects (why
>> are they special and not me).
> 
> I'm in agreement that I don't want to see the TC enforcing this. In fact as
> Thierry has said, lets not special case anyone.
> 
> As soon as a special case is accepted, as nortoriously happens people are 
> going
> to go in a corner and rewrite things in Go. They will be upset later for not
> communicating well on their intentions upfront, and the TC or a few strongly
> opinionated folks in the community are going to be made the bad people just
> about every time.
> 
> Community enforcing or not, I predict this to get out of hand and it's going 
> to
> create more community divide regardless.

I remember in 2010, our founding intent was to converge on two languages for 
OpenStack Development: Python and C. We would prefer Python for things like 
control plane API services, and when needed for performance or other reasons, 
we would use C as an alternative. To my knowledge, since then nothing was ever 
written in C. We have a clear trend of high performance alternative solutions 
showing up in Golang. So, I suggest we go back to the original intent that we 
build things in Python as our preference, and allow teams to select a 
designated alternative when they have their own reasons to do that. I see no 
reason why that designated alternative can not be Golang[1].

Programming styles and languages evolve over time. Otherwise we’d all still be 
using FORTRAN from 1954. OpenStack, as a community needs to have a deliberate 
plan for how to track such evolution. Digging our heels in with a Python only 
attitude is not progressive enough. Giving choice of any option under the sun 
is not practical. We will strike a balance. Recognize that evolution requires 
duplication. There must be overlap (wasted effort maintaining common code in 
multiple languages) in order to allow evolution. This overlap is healthy. We 
don’t need our TC to decide when a project should be allowed to use a 
designated alternative. Set rules that allow for innovation, and then let 
projects decide on their own within such guidelines.

My proposal:

"Openstack projects shall use Python as the preferred programming language. 
Golang may be used as an alternative if the project leadership decides it is 
justified."

Additional (non-regulatory) guidance can also be offered by the OpenStack 
community to indicate when individual projects should decide to use an 
alternative language. In the future as we notice evolution around us, we may 
add other alternatives to that list.

Thanks,

Adrian

[1] I categorically reject the previous rhetoric that casts Golang as a 
premature language that we can’t rely on. That’s FUD, plain and simple. Stop it.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins] Should we rename "Higgins"?

2016-05-24 Thread Adrian Otto
Before considering a project rename, I suggest you seek guidance from the 
OpenStack technical committee, and/or the OpenStack-infra team. There is 
probably a simple workaround to the concern voiced below.

--
Adrian

> On May 24, 2016, at 1:37 AM, Shuu Mutou  wrote:
> 
> Hi all,
> 
> Unfortunately, "higgins" is used by a media server project on Launchpad and by CI 
> software on PyPI. Now, we use "python-higgins" for our project on Launchpad.
> 
> IMO, we should rename the project to prevent the number of points to patch from increasing.
> 
> How about "Gatling"? Its only association is with Magnum. It's not used on either 
> Launchpad or PyPI.
> Is there any idea?
> 
> The renaming opportunity will come (it seems to happen only twice a year) on Friday, 
> June 3rd. A few projects will rename on this date.
> http://markmail.org/thread/ia3o3vz7mzmjxmcx
> 
> And if the project name issue is fixed, I'd like to propose a UI subproject.
> 
> Thanks,
> Shu
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-05-16 Thread Adrian Otto

> On May 16, 2016, at 7:59 AM, Steven Dake (stdake)  wrote:
> 
> Tom,
> 
> Devil's advocate here.. :)
> 
> Can you offer examples of other OpenStack API services which behave in
> this way with a API?

The more common pattern is actually:

   

or:

  

Examples:

# trove resize-instance  
# nova reboot --hard 

The OSC tool uses:

   

Example:

# openstack server reboot [-h] [--hard | --soft] [--wait] 

If we wanted to be consistent with the original OpenStack style, the proposal 
would be something like:

magnum reset [--hard]  
magnum rebuild 
magnum node-delete  []
magnum node-add [--flavor ]  
magnum node-reset  
magnum node-list 

If we wanted to model after OSC, it would be:

magnum bay reset [--hard]  
magnum bay rebuild 
magnum bay node delete  []
magnum bay node add [--flavor ]  
magnum bay node reset  
magnum bay node list 

This one is my preference, because when integrated with OSC, the user does not 
need to change the command arguments, just swap in “openstack” for “magnum”. 
The actual order of placement for named options does not matter.

Adrian

> 
> I'm struggling to think of any off the top of my head, but admittedly
> don't know all the ins and outs of OpenStack ;)
> 
> Thanks
> -steve
> 
> 
> On 5/16/16, 2:28 AM, "Cammann, Tom"  wrote:
> 
>> The discussion at the summit was very positive around this requirement
>> but as this change will make a large impact to Magnum it will need a spec.
>> 
>> On the API of things, I was thinking a slightly more generic approach to
>> incorporate other lifecycle operations into the same API.
>> Eg:
>> magnum bay-manage  
>> 
>> magnum bay-manage  reset --hard
>> magnum bay-manage  rebuild
>> magnum bay-manage  node-delete 
>> magnum bay-manage  node-add --flavor 
>> magnum bay-manage  node-reset 
>> magnum bay-manage  node-list
>> 
>> Tom
>> 
>> From: Yuanying OTSUKA 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Monday, 16 May 2016 at 01:07
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>> 
>> Hi,
>> 
>> I think users also want to specify which node to delete.
>> So we should manage “nodes” individually.
>> 
>> For example:
>> $ magnum node-create --bay …
>> $ magnum node-list --bay
>> $ magnum node-delete $NODE_UUID
>> 
>> Anyway, if magnum wants to manage the lifecycle of container infrastructure,
>> this feature is necessary.
>> 
>> Thanks
>> -yuanying
>> 
>> 
>> On Mon, May 16, 2016 at 7:50, Hongbin Lu
>> >:
>> Hi all,
>> 
>> This is a continued discussion from the design summit. For recap, Magnum
>> manages bay nodes by using ResourceGroup from Heat. This approach works
>> but it is infeasible to manage the heterogeneity across bay nodes, which
>> is a frequently demanded feature. As an example, there is a request to
>> provision bay nodes across availability zones [1]. There is another
>> request to provision bay nodes with different set of flavors [2]. For the
>> request features above, ResourceGroup won’t work very well.
>> 
>> The proposal is to remove the usage of ResourceGroup and manually create
>> a Heat stack for each bay node. For example, for creating a cluster with 2
>> masters and 3 minions, Magnum is going to manage 6 Heat stacks (instead
>> of 1 big Heat stack as right now):
>> * A kube cluster stack that manages the global resources
>> * Two kube master stacks that manage the two master nodes
>> * Three kube minion stacks that manage the three minion nodes
>> 
>> The proposal might require an additional API endpoint to manage nodes or
>> a group of nodes. For example:
>> $ magnum nodegroup-create --bay XXX --flavor m1.small --count 2
>> --availability-zone us-east-1 ….
>> $ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3
>> --availability-zone us-east-2 …
>> 
>> Thoughts?
>> 
>> [1] 
>> https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
>> [2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor
>> 
>> Best regards,
>> Hongbin
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __

[openstack-dev] [magnum] Proposed Revision to Magnum's Mission

2016-04-29 Thread Adrian Otto
Magnum Team,

In accordance with our Fishbowl discussion yesterday at the Newton Design 
Summit in Austin, I have proposed the following revision to Magnum’s mission 
statement:

https://review.openstack.org/311476

The idea is to narrow the scope of our Magnum project to allow us to focus on 
making popular COE software work great with OpenStack, and make it easy for 
OpenStack cloud users to quickly set up fleets of cloud capacity managed by 
chosen COE software (such as Swam, Kubernetes, Mesos, etc.). Cloud operators 
and users will value Multi-Tenancy for COE’s, tight integration with OpenStack, 
and the ability to source this all as a self-service resource.

We agreed to deprecate and remove the /containers resource from Magnum’s API, 
and will leave the door open for a new OpenStack project with its own name and 
mission to satisfy the interests of our community members who want an OpenStack 
API service that abstracts one or more COE’s.

Regards,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-23 Thread Adrian Otto
Magnum is not a COE installer. It offers multi-tenancy from the ground up, is 
well integrated with OpenStack services, and comes with more COE features pre-configured 
than you would get with an ordinary stock deployment. For example, magnum 
offers integration with keystone that allows developer self-service to get a 
native container service in a few minutes with the same ease as getting a 
database server from Trove. It allows cloud operators to set up the COE 
templates in a way that they can be used to fit policies of that particular 
cloud.

Keeping a COE working with OpenStack requires expertise that the Magnum team 
has codified across multiple options.

--
Adrian

On Apr 23, 2016, at 2:55 PM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:

I do not necessarily agree with the viewpoint below, but that was the majority 
viewpoint when I was trying to sell Magnum to them. There are people who were 
interested in adopting Magnum, but they ran away after they figured out that what 
Magnum actually offers is a COE deployment service. My takeaway is that COE 
deployment is not the real pain, and there are several alternatives available 
(Heat, Ansible, Chef, Puppet, Juju, etc.). Limiting Magnum to be a COE 
deployment service might prolong the existing adoption problem.

Best regards,
Hongbin

From: Georgy Okrokvertskhov [mailto:gokrokvertsk...@mirantis.com]
Sent: April-20-16 6:51 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

If Magnum is focused on installation and management of COEs, it will be 
unclear how much it differs from Heat and other generic orchestration tools.  
It looks like most of the current Magnum functionality is provided by Heat. 
Magnum's focus on deployment will potentially lead to another Heat-like API.
Unless Magnum is really focused on containers, its value will be minimal for 
OpenStack users who already use Heat/Orchestration.


On Wed, Apr 20, 2016 at 3:12 PM, Keith Bray 
<keith.b...@rackspace.com<mailto:keith.b...@rackspace.com>> wrote:
Magnum doesn't have to preclude tight integration for single COEs you
speak of.  The heavy lifting of tight integration of the COE into
OpenStack (so that it performs optimally with the infra) can be modular
(where the work is performed by plug-in models to Magnum, not performed by
Magnum itself. The tight integration can be done by leveraging existing
technologies (Heat and/or choose your DevOps tool of choice:
Chef/Ansible/etc). This allows interested community members to focus on
tight integration of whatever COE they want, focusing specifically on the
COE integration part, contributing that integration focus to Magnum via
plug-ins, without having to actually know much about Magnum, but instead
contribute to the COE plug-in using DevOps tools of choice.   Pegging
Magnum to one-and-only one COE means there will be a Magnum2, Magnum3,
etc. project for every COE of interest, all with different ways of kicking
off COE management.  Magnum could unify that experience for users and
operators, without picking a winner in the COE space -- this is just like
Nova not picking a winner between VM flavors or OS types.  It just
facilitates instantiation and management of things.  Opinion here:  The
value of Magnum is in being a light-weight/thin API, providing modular
choice and plug-ability to COE provisioning and management, thereby
providing operators and users choice of COE instantiation and management
(via the bay concept), where each COE can be as tightly or loosely
integrated as desired by different plug-ins contributed to perform the COE
setup and configurations.  So, Magnum could have two or more swarm plug-in
options contributed to the community. One overlays generic swarm on VMs.
The other swarm plug-in could instantiate swarm tightly integrated to
neutron, keystone, etc. on to bare metal.  Magnum just facilitates a plug-in
model with a thin API to offer choice of COE instantiation and management.
The plug-in does the heavy lifting using whatever methods desired by the
curator.

That's my $0.02.

-Keith

On 4/20/16, 4:49 PM, "Joshua Harlow" 
<harlo...@fastmail.com<mailto:harlo...@fastmail.com>> wrote:

>Thierry Carrez wrote:
>> Adrian Otto wrote:
>>> This pursuit is a trap. Magnum should focus on making native container
>>> APIs available. We should not wrap APIs with leaky abstractions. The
>>> lowest common denominator of all COEs is a remarkably low-value API
>>> that adds considerable complexity to Magnum that will not
>>> strategically advance OpenStack. If we instead focus our effort on
>>> making the COEs work better on OpenStack, that would be a winning
>>> strategy. Support and complement our various COE ecosystems.
>
>So I'm all for avoiding 'wrap APIs with leaky abstractions' and 'making
>

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Adrian Otto

> On Apr 20, 2016, at 2:49 PM, Joshua Harlow <harlo...@fastmail.com> wrote:
> 
> Thierry Carrez wrote:
>> Adrian Otto wrote:
>>> This pursuit is a trap. Magnum should focus on making native container
>>> APIs available. We should not wrap APIs with leaky abstractions. The
>>> lowest common denominator of all COEs is a remarkably low-value API
>>> that adds considerable complexity to Magnum that will not
>>> strategically advance OpenStack. If we instead focus our effort on
>>> making the COEs work better on OpenStack, that would be a winning
>>> strategy. Support and complement our various COE ecosystems.
> 
> So I'm all for avoiding 'wrap APIs with leaky abstractions' and 'making
> COEs work better on OpenStack' but I do dislike the part about COEs (plural) 
> because it is once again the old non-opinionated problem that we (as a 
> community) suffer from.
> 
> Just my 2 cents, but I'd almost rather we pick one COE and integrate that 
> deeply/tightly with openstack, and yes if this causes some part of the 
> openstack community to be annoyed, meh, too bad. Sadly I have a feeling we are 
> hurting ourselves by continuing to try to be everything and not picking 
> anything (it's a general thing we, as a group, seem to be good at, lol). I 
> mean I get the reason to just support all the things, but it feels like we as 
> a community could just pick something, work together on figuring out how to 
> pick one, using all these bright leaders we have to help make that possible 
> (and yes this might piss some people off, too bad). Then work toward making 
> that something great and move on…

The key issue preventing the selection of only one COE is that this area is 
moving very quickly. If we had decided what to pick at the time the 
Magnum idea was created, we would have selected Docker. If you look at it 
today, you might pick something else. A few months down the road, there may be 
yet another choice that is more compelling. The fact that a cloud operator can 
integrate services with OpenStack, and have the freedom to offer support for a 
selection of COEs is a form of insurance against the risk of picking the wrong 
one. Our compute service offers a choice of hypervisors, our block storage 
service offers a choice of storage hardware drivers, our networking service 
allows a choice of network drivers. Magnum is following the same pattern of 
choice that has made OpenStack compelling for a very diverse community. That 
design consideration was intentional.

Over time, we can focus the majority of our effort on deep integration with 
COEs that users select the most. I’m convinced it’s still too early to bet the 
farm on just one choice.

Adrian

>> I'm with Adrian on that one. I've attended a lot of container-oriented
>> conferences over the past year and my main takeaway is that this new
>> crowd of potential users is not interested (at all) in an
>> OpenStack-specific lowest common denominator API for COEs. They want to
>> take advantage of the cool features in Kubernetes API or the versatility
>> of Mesos. They want to avoid caring about the infrastructure provider
>> bit (and not deploy Mesos or Kubernetes themselves).
>> 
>> Let's focus on the infrastructure provider bit -- that is what we do and
>> what the ecosystem wants us to provide.
>> 
> 


Re: [openstack-dev] [Magnum]Cache docker images

2016-04-20 Thread Adrian Otto
Hongbin,

Both of approaches you suggested may only work for one binary format. If you 
try to use docker on a different system architecture, the pre-cache of images 
makes it even more difficult to get the correct images built and loaded.

I suggest we take an approach that allows the Baymodel creator to specify a 
docker registry and/or prefix that will determine where docker images are 
pulled from if they are not found in the local cache. That would give cloud 
operators the option to set up such a registry locally and populate it with the 
right images. This approach would also make it easier to customize the Magnum 
setup by tweaking the container images prior to use.

Thanks,

Adrian
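
As a rough illustration of the registry/prefix idea (the helper and the
registry URL below are made up for this example, not existing Magnum code),
the conductor-side logic amounts to little more than:

    def qualify_image(image, registry_prefix=None):
        """Prepend an optional Baymodel-supplied registry/prefix to an image
        reference, falling back to the default registry when none is set."""
        if registry_prefix:
            return "%s/%s" % (registry_prefix.rstrip("/"), image)
        return image


    # A cloud-local mirror the operator populated with the right images.
    for image in ("kubernetes/pause", "swarm:1.0.0"):
        print(qualify_image(image, "registry.cloud.example.com:5000"))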

On Apr 19, 2016, at 11:58 AM, Hongbin Lu 
> wrote:

Eli,

The approach of pre-pulling docker images has a problem. It only works for 
a specific docker storage driver. In comparison, the tar file approach is 
portable across different storage drivers.

Best regards,
Hongbin

From: taget [mailto:qiaoliy...@gmail.com]
Sent: April-19-16 4:26 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum]Cache docker images

Hi, hello again.

I believe you are talking about this bp 
https://blueprints.launchpad.net/magnum/+spec/cache-docker-images
so please ignore my previous reply; that addresses another topic (the limited 
network problem).

I think you are on the right track in building docker images, but such an image 
can only be bootstrapped by cloud-init; without cloud-init the container image 
tar files are not loaded at all, so this may not be the best way.

I'd suggest that the best way may be to pull the docker images while building 
the atomic image. Per my understanding, the image build process mounts the 
image read/write in some temporary directory and chroots into that directory, 
so we can do some custom operations there.

I can give this a try in the build process (I guess rpm-ostree supports some 
hook scripts).

On 2016-04-19 11:41, Eli Qiao wrote:
@wanghua

I think there was some discussion already; check 
https://blueprints.launchpad.net/magnum/+spec/support-private-registry
and https://blueprints.launchpad.net/magnum/+spec/allow-user-softwareconfig
On 2016-04-19 10:57, Wanghua wrote:
Hi all,

We want to eliminate pulling docker images over the Internet on bay 
provisioning. There are two problems with this approach:
1. Pulling docker images over the Internet is slow and fragile.
2. Some clouds don't have external Internet access.

It is suggested to build all the required images into the cloud images to 
resolve the issue.

Here is a solution:
We export the docker images as tar files, and put the tar files into a dir in 
the image when we build the image. And we add scripts to load the tar files in 
cloud-init, so that we don't need to download the docker images.

Any advice for this solution or any better solution?

Regards,
Wanghua
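
A minimal sketch of the load step described above, assuming the image build
placed the exported tar files under a directory such as /opt/magnum/images
(the path and the cloud-init wiring are illustrative):

    import glob
    import subprocess

    TAR_DIR = "/opt/magnum/images"

    # 'docker load -i' restores images exported earlier with 'docker save',
    # so no registry pull over the Internet is needed at provisioning time.
    for tarball in sorted(glob.glob("%s/*.tar" % TAR_DIR)):
        subprocess.check_call(["docker", "load", "-i", tarball])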







--

Best Regards, Eli Qiao

Intel OTC China







--

Best Regards, Eli Qiao



Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-19 Thread Adrian Otto
This pursuit is a trap. Magnum should focus on making native container APIs 
available. We should not wrap APIs with leaky abstractions. The lowest common 
denominator of all COEs is a remarkably low-value API that adds considerable 
complexity to Magnum that will not strategically advance OpenStack. If we 
instead focus our effort on making the COEs work better on OpenStack, that 
would be a winning strategy. Support and complement our various COE ecosystems.

Thanks,

Adrian

> On Apr 19, 2016, at 8:26 AM, Hongbin Lu <hongbin...@huawei.com> wrote:
> 
> Sorry, it is too late to adjust the schedule now, but I don't mind having a 
> pre-discussion here. If you have opinions/ideas on this topic but cannot 
> attend the session [1], we'd like to have your inputs in this ML or in the 
> etherpad [2]. This will help to set the stage for the session.
> 
> For background, Magnum supports provisioning Container Orchestration Engines 
> (COEs), including Kubernetes, Docker Swarm and Apache Mesos, on top of Nova 
> instances. After the provisioning, users need to use the native COE APIs to 
> manage containers (and/or other COE resources). In the Austin summit, we will 
> have a session to discuss if it makes sense to build a common abstraction 
> layer for the supported COEs. If you think it is a good idea, it would be 
> great to elaborate the details. For example, answering the following 
> questions could be useful:
> * Which abstraction(s) you are looking for (i.e. container, pod)?
> * What are your use cases for the abstraction(s)?
> * How do the native APIs provided by individual COEs not satisfy your 
> requirements?
> 
> If you think it is a bad idea, I would love to hear your inputs as well:
> * Why is it bad?
> * If there is no common abstraction, how to address the pain of leveraging 
> native COE APIs as reported below?
> 
> [1] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9102 
> [2] https://etherpad.openstack.org/p/newton-magnum-unified-abstraction
> 
> Best regards,
> Hongbin
> 
>> -Original Message-
>> From: Fox, Kevin M [mailto:kevin@pnnl.gov]
>> Sent: April-18-16 6:13 PM
>> To: OpenStack Development Mailing List (not for usage questions);
>> Flavio Percoco
>> Cc: foundat...@lists.openstack.org
>> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
>> One Platform - Containers/Bare Metal? (Re: Board of Directors Meeting)
>> 
>> I'd love to attend, but this is right on top of the app catalog meeting.
>> I think the app catalog might be one of the primary users of a cross
>> COE api.
>> 
>> At minimum we'd like to be able to store URLs for
>> Kubernetes/Swarm/Mesos templates and have an api to kick off a workflow
>> in Horizon to have Magnum start up a new instance of the template
>> the user selected.
>> 
>> Thanks,
>> Kevin
>> 
>> From: Hongbin Lu [hongbin...@huawei.com]
>> Sent: Monday, April 18, 2016 2:09 PM
>> To: Flavio Percoco; OpenStack Development Mailing List (not for usage
>> questions)
>> Cc: foundat...@lists.openstack.org
>> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
>> One Platform - Containers/Bare Metal? (Re: Board of Directors Meeting)
>> 
>> Hi all,
>> 
>> Magnum will have a fishbowl session to discuss if it makes sense to
>> build a common abstraction layer for all COEs (kubernetes, docker swarm
>> and mesos):
>> 
>> https://www.openstack.org/summit/austin-2016/summit-
>> schedule/events/9102
>> 
>> Frankly, this is a controversial topic since I heard agreements and
>> disagreements from different people. It would be great if all of you
>> can join the session and share your opinions and use cases. I wish we
>> will have a productive discussion.
>> 
>> Best regards,
>> Hongbin
>> 
>>> -Original Message-----
>>> From: Flavio Percoco [mailto:fla...@redhat.com]
>>> Sent: April-12-16 8:40 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Cc: foundat...@lists.openstack.org
>>> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
>>> One Platform - Containers/Bare Metal? (Re: Board of Directors Meeting)
>>> 
>>>> On 11/04/16 16:53 +, Adrian Otto wrote:
>>>> Amrith,
>>>> 
>>>> I respect your point of view, and agree that the idea of a common
>>>> compute API is attractive... until you think a bit deeper about what
>>> that
>>>> would mean. We seriously considered a "global"

Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-12 Thread Adrian Otto
Please don't miss the point here. We are seeking a solution that allows a 
location to place a client side encrypted blob of data (A TLS cert) that 
multiple magnum-conductor processes on different hosts can reach over the 
network.

We *already* support using Barbican for this purpose, as well as storage in 
flat files (not as secure as Barbican, and only works with a single conductor) 
and are seeking a second alternative for clouds that have not yet adopted 
Barbican, and want to use multiple conductors. Once Barbican is common in 
OpenStack clouds, both alternatives are redundant and can be deprecated. If 
Keystone depends on Barbican, then we have no reason to keep using it. That 
will mean that Barbican is core to OpenStack.

Our alternative to using Keystone is storing the encrypted blobs in the Magnum 
database which would cause us to add an API feature in magnum that is the exact 
functional equivalent of the credential store in Keystone. That is something we 
are trying to avoid by leveraging existing OpenStack APIs.

--
Adrian
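
For context, a minimal sketch of the Barbican path already supported (the auth
URL, credentials, and file names are placeholders, and this is illustrative
rather than Magnum code): store the PEM as a secret once, then any conductor
can fetch it by reference:

    from barbicanclient import client as barbican_client
    from keystoneauth1 import session as ks_session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url="http://keystone.example.com:5000/v3",
                       username="magnum", password="secret",
                       project_name="service",
                       user_domain_id="default", project_domain_id="default")
    barbican = barbican_client.Client(session=ks_session.Session(auth=auth))

    # Store the certificate payload; store() returns the secret reference.
    secret = barbican.secrets.create(name="bay-42-client-cert",
                                     payload=open("bay-42-cert.pem").read())
    ref = secret.store()

    # Any magnum-conductor can later retrieve it by reference.
    print(barbican.secrets.get(ref).payload)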

On Apr 12, 2016, at 3:44 PM, Dolph Mathews 
> wrote:


On Tue, Apr 12, 2016 at 3:27 PM, Lance Bragstad 
> wrote:
Keystone's credential API pre-dates barbican. We started talking about having 
the credential API backed by barbican after barbican became a thing. I'm not 
sure if any work has been done to move the credential API in this direction. 
From a security perspective, I think it would make sense for keystone to be 
backed by barbican.

+1

And regarding the "inappropriate use of keystone," I'd agree... without this 
spec, keystone is entirely useless as any sort of alternative to Barbican:

  https://review.openstack.org/#/c/284950/

I suspect Barbican will forever be a much more mature choice for Magnum.


On Tue, Apr 12, 2016 at 2:43 PM, Hongbin Lu 
> wrote:
Hi all,

In short, some Magnum team members proposed to store TLS certificates in 
Keystone credential store. As Magnum PTL, I want to get agreements (or 
non-disagreement) from OpenStack community in general, Keystone community in 
particular, before approving the direction.

In details, Magnum leverages TLS to secure the API endpoint of 
kubernetes/docker swarm. The usage of TLS requires a secure store for storing 
TLS certificates. Currently, we leverage Barbican for this purpose, but we 
constantly received requests to decouple Magnum from Barbican (because users 
normally don't have Barbican installed in their clouds). Some Magnum team 
members proposed to leverage Keystone credential store as a Barbican 
alternative [1]. Therefore, I want to confirm the Keystone team's position on 
this proposal (I remember someone from Keystone mentioned this is an 
inappropriate use of Keystone. May I ask for further clarification?). Thanks 
in advance.

[1] https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store

Best regards,
Hongbin



Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Adrian Otto
That’s not what I was talking about here. I’m addressing the interest in a 
common compute API for the various types of compute (VM, BM, Container). Having 
a “containers” API for multiple COE’s is a different subject.

Adrian

On Apr 11, 2016, at 11:10 AM, Hongbin Lu 
<hongbin...@huawei.com> wrote:

Sorry, I disagree.

The Magnum team doesn’t have consensus to reject the idea of unifying APIs from 
different container technologies. In contrast, the idea of unified Container APIs 
has been constantly proposed by different people in the past. I will try to 
allocate a session at the design summit to discuss it as a team.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-11-16 12:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [OpenStack Foundation] [openstack-dev] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Amrith,

I respect your point of view, and agree that the idea of a common compute API 
is attractive… until you think a bit deeper about what that would mean. We 
seriously considered a “global” compute API at the time we were first 
contemplating Magnum. However, what we came to learn through the journey of 
understanding the details of how such a thing would be implemented, that such 
an API would either be (1) the lowest common denominator (LCD) of all compute 
types, or (2) an exceedingly complex interface.

You expressed a sentiment below that trying to offer choices for VM, Bare Metal 
(BM), and Containers for Trove instances “adds considerable complexity”. 
Roughly the same complexity would accompany the use of a comprehensive compute 
API. I suppose you were imagining an LCD approach. If that’s what you want, 
just use the existing Nova API, and load different compute drivers on different 
host aggregates. A single Nova client can produce VM, BM (Ironic), and 
Container (libvirt-lxc) instances all with a common API (Nova) if it’s 
configured in this way. That’s what we do. Flavors determine which compute type 
you get.

If what you meant is that you could tap into the power of all the unique 
characteristics of each of the various compute types (through some modular 
extensibility framework) you’ll likely end up with complexity in Trove that is 
comparable to integrating with the native upstream APIs, along with the 
disadvantage of waiting for OpenStack to continually catch up to the pace of 
change of the various upstream systems on which it depends. This is a recipe 
for disappointment.

We concluded that wrapping native APIs is a mistake, particularly when they are 
sufficiently different than what the Nova API already offers. Containers APIs 
have limited similarities, so when you try to make a universal interface to all 
of them, you end up with a really complicated mess. It would be even worse if 
we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s 
approach is to offer the upstream native API’s for the different container 
orchestration engines (COE), and compose Bays for them to run on that are built 
from the compute types that OpenStack supports. We do this by using different 
Heat orchestration templates (and conditional templates) to arrange a COE on 
the compute type of your choice. With that said, there are still gaps where not 
all storage or network drivers work with Ironic, and there are non-trivial 
security hurdles to clear to safely use Bays composed of libvirt-lxc instances 
in a multi-tenant environment.

My suggestion to get what you want for Trove is to see if the cloud has Magnum, 
and if it does, create a bay with the flavor type specified for whatever 
compute type you want, and then use the native API for the COE you selected for 
that bay. Start your instance on the COE, just like you use Nova today. This 
way, you have low complexity in Trove, and you can scale both the number of 
instances of your data nodes (containers), and the infrastructure on which they 
run (Nova instances).

Regards,

Adrian



On Apr 11, 2016, at 8:47 AM, Amrith Kumar 
<amr...@tesora.com> wrote:

Monty, Dims,

I read the notes and was similarly intrigued about the idea. In particular, 
from the perspective of projects like Trove, having a common Compute API is 
very valuable. It would allow the projects to have a single view of 
provisioning compute, as we can today with Nova and get the benefit of bare 
metal through Ironic, VM's through Nova VM's, and containers through 
nova-docker.

With this in place, a project like Trove can offer database-as-a-service on a 
spectrum of compute infrastructures as any end-user would expect. Databases 
don't always make sense in VM's, and while containers are great for quick and 
dirty prototyping, and VM's are great for much more, there are databases

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Adrian Otto
Amrith,

I respect your point of view, and agree that the idea of a common compute API 
is attractive… until you think a bit deeper about what that would mean. We 
seriously considered a “global” compute API at the time we were first 
contemplating Magnum. However, what we came to learn through the journey of 
understanding the details of how such a thing would be implemented, that such 
an API would either be (1) the lowest common denominator (LCD) of all compute 
types, or (2) an exceedingly complex interface.

You expressed a sentiment below that trying to offer choices for VM, Bare Metal 
(BM), and Containers for Trove instances “adds considerable complexity”. 
Roughly the same complexity would accompany the use of a comprehensive compute 
API. I suppose you were imagining an LCD approach. If that’s what you want, 
just use the existing Nova API, and load different compute drivers on different 
host aggregates. A single Nova client can produce VM, BM (Ironic), and 
Container (libvirt-lxc) instances all with a common API (Nova) if it’s 
configured in this way. That’s what we do. Flavors determine which compute type 
you get.
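
To illustrate the flavor point with a hedged sketch (the flavor and image IDs
below are placeholders an operator would map to VM, Ironic, and libvirt-lxc
host aggregates):

    from keystoneauth1 import session as ks_session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    auth = v3.Password(auth_url="http://keystone.example.com:5000/v3",
                       username="demo", password="secret", project_name="demo",
                       user_domain_id="default", project_domain_id="default")
    nova = nova_client.Client("2", session=ks_session.Session(auth=auth))

    # The same call, three compute types; only the flavor differs.
    for name, flavor_id in (("vm-node", "FLAVOR-UUID-VM"),
                            ("bm-node", "FLAVOR-UUID-IRONIC"),
                            ("lxc-node", "FLAVOR-UUID-LXC")):
        nova.servers.create(name=name, image="IMAGE-UUID", flavor=flavor_id)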

If what you meant is that you could tap into the power of all the unique 
characteristics of each of the various compute types (through some modular 
extensibility framework) you’ll likely end up with complexity in Trove that is 
comparable to integrating with the native upstream APIs, along with the 
disadvantage of waiting for OpenStack to continually catch up to the pace of 
change of the various upstream systems on which it depends. This is a recipe 
for disappointment.

We concluded that wrapping native APIs is a mistake, particularly when they are 
sufficiently different than what the Nova API already offers. Containers APIs 
have limited similarities, so when you try to make a universal interface to all 
of them, you end up with a really complicated mess. It would be even worse if 
we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s 
approach is to offer the upstream native API’s for the different container 
orchestration engines (COE), and compose Bays for them to run on that are built 
from the compute types that OpenStack supports. We do this by using different 
Heat orchestration templates (and conditional templates) to arrange a COE on 
the compute type of your choice. With that said, there are still gaps where not 
all storage or network drivers work with Ironic, and there are non-trivial 
security hurdles to clear to safely use Bays composed of libvirt-lxc instances 
in a multi-tenant environment.

My suggestion to get what you want for Trove is to see if the cloud has Magnum, 
and if it does, create a bay with the flavor type specified for whatever 
compute type you want, and then use the native API for the COE you selected for 
that bay. Start your instance on the COE, just like you use Nova today. This 
way, you have low complexity in Trove, and you can scale both the number of 
instances of your data nodes (containers), and the infrastructure on which they 
run (Nova instances).

Regards,

Adrian
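
A hedged sketch of that flow at the REST level (the field names follow the
Magnum v1 bay API of that era as best I recall; the endpoint, token, and IDs
are placeholders):

    import requests

    MAGNUM = "http://magnum.example.com:9511/v1"
    HEADERS = {"X-Auth-Token": "KEYSTONE-TOKEN"}

    # Create a bay from a baymodel whose flavor selects the compute type.
    bay = requests.post(MAGNUM + "/bays", headers=HEADERS, json={
        "name": "trove-data-nodes",
        "baymodel_id": "BAYMODEL-UUID",
        "node_count": 3,
    }).json()
    print("bay status:", bay.get("status"))

    # Once the bay is ACTIVE, Trove would talk to the COE's native API
    # (e.g. the Kubernetes or Swarm endpoint reported for the bay) directly.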



On Apr 11, 2016, at 8:47 AM, Amrith Kumar 
> wrote:

Monty, Dims,

I read the notes and was similarly intrigued about the idea. In particular, 
from the perspective of projects like Trove, having a common Compute API is 
very valuable. It would allow the projects to have a single view of 
provisioning compute, as we can today with Nova and get the benefit of bare 
metal through Ironic, VM's through Nova VM's, and containers through 
nova-docker.

With this in place, a project like Trove can offer database-as-a-service on a 
spectrum of compute infrastructures as any end-user would expect. Databases 
don't always make sense in VM's, and while containers are great for quick and 
dirty prototyping, and VM's are great for much more, there are databases that 
will in production only be meaningful on bare-metal.

Therefore, if there is a move towards offering a common API for VM's, 
bare-metal and containers, that would be huge.

Without such a mechanism, consuming containers in Trove adds considerable 
complexity and leads to a very sub-optimal architecture (IMHO). FWIW, a working 
prototype of Trove leveraging Ironic, VM's, and nova-docker to provision 
databases is something I worked on a while ago, and have not revisited it since 
then (once the direction appeared to be Magnum for containers).

With all that said, I don't want to downplay the value in a container specific 
API. I'm merely observing that from the perspective of a consumer of computing 
services, a common abstraction is incredibly valuable.

Thanks,

-amrith

-Original Message-
From: Monty Taylor [mailto:mord...@inaugust.com]
Sent: Monday, April 11, 2016 11:31 AM
To: Allison Randal >; Davanum 
Srinivas
>; 

Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

2016-04-08 Thread Adrian Otto

On Apr 8, 2016, at 3:15 PM, Hongbin Lu 
> wrote:

Hi team,
I would like to give an update for this thread. In the last team meeting, we 
discussed several options to introduce Chronos to our mesos bay:
1.   Add Chronos to the mesos bay. With this option, the mesos bay will 
have two mesos frameworks by default (Marathon and Chronos).
2.   Add a configuration hook for users to configure additional mesos 
frameworks, such as Chronos. With this option, Magnum team doesn’t need to 
maintain extra framework configuration. However, users need to do it themselves.

This is my preference.

Adrian

3.   Create a dedicated bay type for Chronos. With this option, we separate 
Marathon and Chronos into two different bay types. As a result, each bay type 
becomes easier to maintain, but those two mesos frameworks cannot share 
resources (a key feature of mesos is to have different frameworks running on 
the same cluster to increase resource utilization).
Which option do you prefer? Or do you have other suggestions? Advice is welcome.

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: March-28-16 12:19 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

Jay,

just keep in mind that Chronos can be run by Marathon.

---
Egor


From: Jay Lau >
To: OpenStack Development Mailing List (not for usage questions) 
>
Sent: Friday, March 25, 2016 7:01 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

Yes, that's exactly what I want to do: adding the dcos cli and also adding 
Chronos to the Mesos bay so that it can handle both long-running services and 
batch jobs.

Thanks,

On Fri, Mar 25, 2016 at 5:25 PM, Michal Rostecki 
> wrote:

On 03/25/2016 07:57 AM, Jay Lau wrote:

Hi Magnum,

The current mesos bay only includes mesos and marathon; it would be better to
enhance the mesos bay with more components and finally evolve it into a
DCOS which focuses on container service based on mesos.

For more detail, please refer to
https://docs.mesosphere.com/getting-started/installing/installing-enterprise-edition/

Mesosphere now has a template on AWS which can help customers deploy
a DCOS on AWS; it would be great if Magnum can also support it based on
OpenStack.

I filed a bp here
https://blueprints.launchpad.net/magnum/+spec/mesos-dcos , please show
your comments if any.

--
Thanks,

Jay Lau (Guangya Liu)

Sorry if I'm missing something, but isn't DCOS a closed source software?

However, the "DCOS cli"[1] seems to be working perfectly with Marathon and 
Mesos installed by any means if you configure it well. I think that the thing 
which can be done in Magnum is to make the experience with "DCOS" tools as easy 
as possible by using open source components from Mesosphere.

Cheers,
Michal

[1] https://github.com/mesosphere/dcos-cli





--
Thanks,
Jay Lau (Guangya Liu)



Re: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer team

2016-03-31 Thread Adrian Otto
+1

On Mar 31, 2016, at 11:18 AM, Hongbin Lu 
> wrote:

Hi all,

Eli Qiao has been consistently contributing to Magnum for a while. His 
contributions started about 10 months ago. Along the way, he implemented 
several important blueprints and fixed a lot of bugs. His contribution covers 
various aspects (i.e. APIs, conductor, unit/functional tests, all the COE 
templates, etc.), which shows that he has a good understanding of almost every 
piece of the system. The feature set he contributed to has proven to be 
beneficial to the project. For example, the gate testing framework he heavily 
contributed to is what we rely on every day. His code reviews are also 
consistent and useful.

I am happy to propose Eli Qiao to be a core reviewer of Magnum team. According 
to the OpenStack Governance process [1], we require a minimum of 4 +1 votes 
within a 1 week voting window (consider this proposal as a +1 vote from me). A 
vote of -1 is a veto. If we cannot get enough votes or there is a veto vote 
prior to the end of the voting window, Eli is not able to join the core team 
and needs to wait 30 days to reapply.

The voting is open until Thursday, April 7th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin





Re: [openstack-dev] [magnum] Generate atomic images using diskimage-builder

2016-03-29 Thread Adrian Otto
Steve,

I will defer to the experts in openstack-infra on this one. As long as the 
image works without modifications, then I think it would be fine to cache the 
upstream one. Practically speaking, I do anticipate a point at which we will 
want to adjust something in the image, and it will be nice to have a well 
defined point of customization in place for that in advance.

Adrian

On Mar 29, 2016, at 12:54 PM, Steven Dake (stdake) 
<std...@cisco.com> wrote:

Adrian,

Makes sense.  Do the images have to be built to be mirrored, though?  Can't
they just be put on the mirror sites from upstream?

Thanks
-steve

On 3/29/16, 11:02 AM, "Adrian Otto" 
<adrian.o...@rackspace.com> wrote:

Steve,

I'm very interested in having an image locally cached in glance in each
of the clouds used by OpenStack infra. The local caching of the glance
images will produce much faster gate testing times. I don't care about
how the images are built, but we really do care about the performance
outcome.

Adrian

On Mar 29, 2016, at 10:38 AM, Steven Dake (stdake) 
<std...@cisco.com>
wrote:

Yolanda,

That is a fantastic objective.  Matthieu asked why build our own images
if
the upstream images work and need no further customization?

Regards
-steve

On 3/29/16, 1:57 AM, "Yolanda Robla Mota" 
<yolanda.robla-m...@hpe.com>
wrote:

Hi
The idea is to build our own images using diskimage-builder, rather than
downloading the image from external sources. That way, the image can
live in our mirrors, and is built using the same pattern as other
images
used in OpenStack.
It also opens the door to customizing the images, using custom trees, if
there is a need for it. Currently we rely on the official tree for Fedora 23
Atomic (https://dl.fedoraproject.org/pub/fedora/linux/atomic/23/) as
the default.

Best,
Yolanda

On 29/03/16 at 10:17, Mathieu Velten wrote:
Hi,

We are using the official Fedora Atomic 23 images here (on Mitaka M1
however) and it seems to work fine with at least Kubernetes and Docker
Swarm.
Any reason to continue building a specific Magnum image?

Regards,

Mathieu

On Wednesday, 23 March 2016 at 12:09 +0100, Yolanda Robla Mota wrote:
Hi
I wanted to start a discussion on how Fedora Atomic images are being
built. Currently the process for generating the atomic images used
on
Magnum is described here:
http://docs.openstack.org/developer/magnum/dev/build-atomic-image.htm
l.
The image needs to be built manually, uploaded to fedorapeople, and
then
consumed from there in the magnum tests.
I have been working on a feature to allow diskimage-builder to
generate
these images. The code that makes it possible is here:
https://review.openstack.org/287167
This will allow magnum images to be generated on infra, using the
diskimage-builder element. This element also has the ability to
consume
any tree we need, so images can be customized on demand. I generated
one
image using this element, and uploaded to fedora people. The image
has
passed tests, and has been validated by several people.

So i'm raising that topic to decide what should be the next steps.
This
change to generate fedora-atomic images has not already landed into
diskimage-builder. But we have two options here:
- add this element to generic diskimage-builder elements, as i'm
doing now
- generate this element internally on magnum. So we can have a
directory
in magnum project, called "elements", and have the fedora-atomic
element
here. This will give us more control on the element behaviour, and
will
allow to update the element without waiting for external reviews.

Once the code for diskimage-builder has landed, another step can be
to
periodically generate images using a magnum job, and upload these
images
to OpenStack Infra mirrors. Currently the image is based on Fedora
F23,
docker-host tree. But different images can be generated if we need a
better option.

As soon as the images are available on internal infra mirrors, the
tests
can be changed, to consume these internals images. By this way the
tests
can be a bit faster (i know that the bottleneck is on the functional
testing, but if we reduce the download time it can help), and tests
can
be more reliable, because we will be removing an external dependency.

So i'd like to get more feedback on this topic, options and next
steps
to achieve the goals. Best




--
Yolanda Robla Mota
Cloud Automation and Distribution Engineer
+34 605641639
yolanda.robla-m...@hpe.com




Re: [openstack-dev] [magnum] Generate atomic images using diskimage-builder

2016-03-29 Thread Adrian Otto
Steve,

I’m very interested in having an image locally cached in glance in each of the 
clouds used by OpenStack infra. The local caching of the glance images will 
produce much faster gate testing times. I don’t care about how the images are 
built, but we really do care about the performance outcome.

Adrian
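
As a sketch of what caching an image in glance looks like for each infra cloud
(auth details and the file name are placeholders):

    from glanceclient import client as glance_client
    from keystoneauth1 import session as ks_session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url="http://keystone.example.com:5000/v3",
                       username="infra", password="secret",
                       project_name="infra",
                       user_domain_id="default", project_domain_id="default")
    glance = glance_client.Client("2", session=ks_session.Session(auth=auth))

    # Register the image record, then upload the bits so gate jobs can boot
    # it locally instead of downloading it from fedorapeople every run.
    image = glance.images.create(name="fedora-atomic-23",
                                 disk_format="qcow2", container_format="bare")
    glance.images.upload(image.id, open("fedora-atomic-23.qcow2", "rb"))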

> On Mar 29, 2016, at 10:38 AM, Steven Dake (stdake)  wrote:
> 
> Yolanda,
> 
> That is a fantastic objective.  Matthieu asked why build our own images if
> the upstream images work and need no further customization?
> 
> Regards
> -steve
> 
> On 3/29/16, 1:57 AM, "Yolanda Robla Mota" 
> wrote:
> 
>> Hi
>> The idea is to build our own images using diskimage-builder, rather than
>> downloading the image from external sources. That way, the image can
>> live in our mirrors, and is built using the same pattern as other images
>> used in OpenStack.
>> It also opens the door to customizing the images, using custom trees, if
>> there is a need for it. Currently we rely on the official tree for Fedora 23
>> Atomic (https://dl.fedoraproject.org/pub/fedora/linux/atomic/23/) as
>> the default.
>> 
>> Best,
>> Yolanda
>> 
>> On 29/03/16 at 10:17, Mathieu Velten wrote:
>>> Hi,
>>> 
>>> We are using the official Fedora Atomic 23 images here (on Mitaka M1
>>> however) and it seems to work fine with at least Kubernetes and Docker
>>> Swarm.
>>> Any reason to continue building a specific Magnum image?
>>> 
>>> Regards,
>>> 
>>> Mathieu
>>> 
>>> On Wednesday, 23 March 2016 at 12:09 +0100, Yolanda Robla Mota wrote:
 Hi
 I wanted to start a discussion on how Fedora Atomic images are being
 built. Currently the process for generating the atomic images used
 on
 Magnum is described here:
 http://docs.openstack.org/developer/magnum/dev/build-atomic-image.htm
 l.
 The image needs to be built manually, uploaded to fedorapeople, and
 then
 consumed from there in the magnum tests.
 I have been working on a feature to allow diskimage-builder to
 generate
 these images. The code that makes it possible is here:
 https://review.openstack.org/287167
 This will allow magnum images to be generated on infra, using the
 diskimage-builder element. This element also has the ability to
 consume
 any tree we need, so images can be customized on demand. I generated
 one
 image using this element, and uploaded to fedora people. The image
 has
 passed tests, and has been validated by several people.
 
 So i'm raising that topic to decide what should be the next steps.
 This
 change to generate fedora-atomic images has not yet landed in
 diskimage-builder. But we have two options here:
 - add this element to generic diskimage-builder elements, as i'm
 doing now
 - generate this element internally on magnum. So we can have a
 directory
 in magnum project, called "elements", and have the fedora-atomic
 element
 here. This will give us more control on the element behaviour, and
 will
 allow to update the element without waiting for external reviews.
 
 Once the code for diskimage-builder has landed, another step can be
 to
 periodically generate images using a magnum job, and upload these
 images
 to OpenStack Infra mirrors. Currently the image is based on Fedora
 F23,
 docker-host tree. But different images can be generated if we need a
 better option.
 
 As soon as the images are available on internal infra mirrors, the
 tests
 can be changed, to consume these internals images. By this way the
 tests
 can be a bit faster (i know that the bottleneck is on the functional
 testing, but if we reduce the download time it can help), and tests
 can
 be more reliable, because we will be removing an external dependency.
 
 So i'd like to get more feedback on this topic, options and next
 steps
 to achieve the goals. Best
 
>>> 
>> 
>> -- 
>> Yolanda Robla Mota
>> Cloud Automation and Distribution Engineer
>> +34 605641639
>> yolanda.robla-m...@hpe.com
>> 
>> 

Re: [openstack-dev] [containers][horizon][magnum-ui] - Stable version for Liberty?

2016-03-24 Thread Adrian Otto
Marcos,

Great question. The current intent is to backport security fixes and critical 
bugs, and to focus on master for new feature development. Although we would 
love to expand scope to backport functionality, I’m not sure it’s realistic 
without an increased level of commitment from that group of contributors. With 
that said, I am willing to approve back porting of basic features to previous 
stable branches on an individual case basis.

Adrian

On Mar 24, 2016, at 6:55 AM, Marcos Fermin Lobo 
> wrote:

Hi all,

I have a question about the magnum-ui plugin for Horizon. I see that there is a 
tarball for the stable/liberty version, but it is very simple: just index views, 
without any "create" actions.

I also see a lot of work in the master branch for this project, but it is not 
compatible with Horizon Liberty. My question to the people in charge of this 
project is: when the code is stable, do you plan to backport all the 
functionality to the Liberty version, or just go to Mitaka?

Thank you.

Regards,
Marcos.


Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are weready?

2016-03-24 Thread Adrian Otto


> On Mar 24, 2016, at 7:48 AM, Hongbin Lu  wrote:
> 
> 
> 
>> -Original Message-
>> From: Assaf Muller [mailto:as...@redhat.com]
>> Sent: March-24-16 9:24 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 -
>> are weready?
>> 
>> On Thu, Mar 24, 2016 at 1:48 AM, Takashi Yamamoto
>>  wrote:
>>> On Thu, Mar 24, 2016 at 6:17 AM, Doug Wiegley
>>>  wrote:
 Migration script has been submitted, v1 is not going anywhere from
>> stable/liberty or stable/mitaka, so it’s about to disappear from master.
 
 I’m thinking in this order:
 
 - remove jenkins jobs
 - wait for heat to remove their jenkins jobs ([heat] added to this
 thread, so they see this coming before the job breaks)
>>> 
>>> magnum is relying on lbaasv1.  (with heat)
>> 
>> Is there anything blocking you from moving to v2?
> 
> A ticket was created for that: 
> https://blueprints.launchpad.net/magnum/+spec/migrate-to-lbaas-v2 . It will 
> be picked up by contributors once it is approved. Please give us some time to 
> finish the work.

Approved.

 - remove q-lbaas from devstack, and any references to lbaas v1 in
>> devstack-gate or infra defaults.
 - remove v1 code from neutron-lbaas
 
 Since newton is now open for commits, this process is going to get
>> started.
 
 Thanks,
 doug
 
 
 
> On Mar 8, 2016, at 11:36 AM, Eichberger, German
>>  wrote:
> 
> Yes, it’s Database only — though we changed the agent driver in the
>> DB from V1 to V2 — so if you bring up a V2 with that database it should
>> reschedule all your load balancers on the V2 agent driver.
> 
> German
> 
> 
> 
> 
>> On 3/8/16, 3:13 AM, "Samuel Bercovici"  wrote:
>> 
>> So this looks like only a database migration, right?
>> 
>> -Original Message-
>> From: Eichberger, German [mailto:german.eichber...@hpe.com]
>> Sent: Tuesday, March 08, 2016 12:28 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 -
>> are weready?
>> 
>> Ok, for what it’s worth we have contributed our migration script:
>> https://review.openstack.org/#/c/289595/ — please look at this as
>> a
>> starting point and feel free to fix potential problems…
>> 
>> Thanks,
>> German
>> 
>> 
>> 
>> 
>> On 3/7/16, 11:00 AM, "Samuel Bercovici" 
>> wrote:
>> 
>>> As far as I recall, you can specify the VIP in creating the LB so
>> you will end up with same IPs.
>>> 
>>> -Original Message-
>>> From: Eichberger, German [mailto:german.eichber...@hpe.com]
>>> Sent: Monday, March 07, 2016 8:30 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 -
>> are weready?
>>> 
>>> Hi Sam,
>>> 
>>> So if you have some 3rd party hardware you only need to change
>> the
>>> database (your steps 1-5) since the 3rd party hardware will just
>>> keep load balancing…
>>> 
>>> Now for Kevin’s case with the namespace driver:
>>> You would need a 6th step to reschedule the loadbalancers with
>> the V2 namespace driver — which can be done.
>>> 
>>> If we want to migrate to Octavia or (from one LB provider to
>> another) it might be better to use the following steps:
>>> 
>>> 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools,
>>>    Health Monitors, Members) into some JSON format file(s)
>>> 2. Delete LBaaS v1
>>> 3. Uninstall LBaaS v1
>>> 4. Install LBaaS v2
>>> 5. Transform the JSON format file into some scripts which recreate the
>>>    load balancers with your provider of choice
>>> 
>>> 6. Run those scripts
>>> 
>>> The problem I see is that we will probably end up with different
>>> VIPs so the end user would need to change their IPs…
>>> 
>>> Thanks,
>>> German
>>> 
>>> 
>>> 
>>> On 3/6/16, 5:35 AM, "Samuel Bercovici" 
>> wrote:
>>> 
 As for a migration tool.
 Due to model changes and deployment changes between LBaaS v1 and
>> LBaaS v2, I am in favor for the following process:
 
 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools,
    Health Monitors, Members) into some JSON format file(s)
 2. Delete LBaaS v1
 3. Uninstall LBaaS v1
 4. Install LBaaS v2
 5. Import the data from 1 back over LBaaS v2 (need to allow moving from
    flavor1 --> flavor2, need to make room for some custom modification for
    mapping between v1 and v2 models)
 
 What do you think?
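
As a sketch of the export in step 1 above (assuming the python-neutronclient
LBaaS v1 calls available at the time; the auth values are placeholders):

    import json

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(
        username="admin", password="secret", tenant_name="admin",
        auth_url="http://keystone.example.com:5000/v2.0")

    # Dump the LBaaS v1 objects to a JSON file that a later script can replay
    # against LBaaS v2 (or another provider).
    dump = {
        "vips": neutron.list_vips()["vips"],
        "pools": neutron.list_pools()["pools"],
        "members": neutron.list_members()["members"],
        "health_monitors": neutron.list_health_monitors()["health_monitors"],
    }
    with open("lbaas-v1-export.json", "w") as f:
        json.dump(dump, f, indent=2)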
 

[openstack-dev] [magnum] Streamline adoption of Magnum

2016-03-22 Thread Adrian Otto
Team,

This thread is a continuation of a branch of the previous High Availability 
thread [1]. As the Magnum PTL, I’ve been aware of a number of different groups 
who have started using Magnum in recent months. For various reasons, there have 
been multiple requests for information about how to turn off the dependency on 
Barbican, which we use for secure storage of TLS certificates that are used to 
secure communications between various components of the software hosted on 
Magnum Bay resources. Examples of this are Docker Swarm, and Kubernetes, which 
we affectionately refer to as COEs (Container Orchestration Engines). The only 
alternative to Barbican currently offered in Magnum is a local file option, 
which is only intended to be used for testing, as the certificates are stored 
unencrypted on a local filesystem where the conductor runs, and when you use 
this option, you can’t scale beyond a single conductor.

Although our whole community agrees that using Barbican is the right long term 
solution for deployments of Magnum, we still wish to make the friction of 
adopting Magnum to be as low as possible without completely compromising all 
security best practices. Some ops teams are willing to adopt a new service, but 
not two. They only want to add Magnum and not Barbican. We think that once 
those operators become familiar with Magnum, adding Barbican will follow. In 
the mean time, we’d like to offer a Barbican alternative that allows Magnum to 
scale beyond one conductor, and allows for encrypted storage of TLS credentials 
needed for unattended bay operations. A blueprint [2] was recently proposed to 
address this. We discussed this in our team meeting today [3], where we used an 
etherpad [4] to collaborate on options that could be used as alternatives 
besides the ones offered today. This thread is not intended to answer how to 
make Barbican easier to adopt, but rather how to make Magnum easier to adopt 
while keeping Barbican as the default best-practice choice for certificate 
storage.

I want to highlight that the implementation of the spec referenced by Daneyon 
Hansen in his quoted response below was completed in the Liberty release 
timeframe, and communication between COE components is now secured using TLS. 
We are discussing the continued use of TLS for encrypted connections between 
COE components, but potentially using Keystone tokens for authentication 
between clients and COE’s rather than using TLS for both encryption and 
authentication. Further notes on this are available in the etherpad [4].

I ask that you please review the options under consideration, note your remarks 
in the etherpad [4], and continue discussion here as needed.

Thanks,

Adrian

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-March/089684.html
[2] https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
[3] 
http://eavesdrop.openstack.org/meetings/containers/2016/containers.2016-03-22-16.01.html
[4] https://etherpad.openstack.org/p/magnum-barbican-alternative

On Mar 22, 2016, at 11:52 AM, Daneyon Hansen (danehans) 
> wrote:



From: Hongbin Lu >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Monday, March 21, 2016 at 8:19 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [magnum] High Availability

Tim,

Thanks for your advice. I respect your point of view and we will definitely 
encourage our users to try Barbican if they see fits. However, for the sake of 
Magnum, I think we have to decouple from Barbican at current stage. The 
coupling of Magnum and Barbican will increase the size of the system by two (1 
project -> 2 projects), which will significantly increase the overall complexity.
· For developers, it incurs significant overheads on development, 
quality assurance, and maintenance.
· For operators, it doubles the amount of efforts of deploying and 
monitoring the system.
· For users, a large system is likely to be unstable and fragile which 
affects the user experience.
In my point of view, I would like to minimize the system we are going to ship, 
so that we can reduce the overheads of maintenance and provides a stable system 
to our users.

I noticed that there are several suggestions to “force” our users to install 
Barbican, with which I would respectfully disagree. Magnum is a young project and we 
are struggling to increase the adoption rate. I think we need to be nice to our 
users, otherwise, they will choose our competitors (there are container service 
everywhere). Please understand that we are not a mature project, like Nova, who 
has thousands of users. We really don’t have the power to force our users to 

Re: [openstack-dev] [magnum] High Availability

2016-03-22 Thread Adrian Otto
Team,

Time to close down this thread and start a new one. I’m going to change the 
subject line, and start with a summary. Please restrict further discussion on 
this thread to the subject of High Availability.

Thanks,

Adrian

On Mar 22, 2016, at 11:52 AM, Daneyon Hansen (danehans) 
> wrote:



From: Hongbin Lu >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Monday, March 21, 2016 at 8:19 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [magnum] High Availability

Tim,

Thanks for your advice. I respect your point of view and we will definitely 
encourage our users to try Barbican if they see fits. However, for the sake of 
Magnum, I think we have to decouple from Barbican at current stage. The 
coupling of Magnum and Barbican will increase the size of the system by two (1 
project -> 2 projects), which will significantly increase the overall complexity.
· For developers, it incurs significant overheads on development, 
quality assurance, and maintenance.
· For operators, it doubles the amount of efforts of deploying and 
monitoring the system.
· For users, a large system is likely to be unstable and fragile which 
affects the user experience.
In my point of view, I would like to minimize the system we are going to ship, 
so that we can reduce the overheads of maintenance and provides a stable system 
to our users.

I noticed that there are several suggestions to “force” our users to install 
Barbican, with which I would respectfully disagree. Magnum is a young project and we 
are struggling to increase the adoption rate. I think we need to be nice to our 
users, otherwise, they will choose our competitors (there are container service 
everywhere). Please understand that we are not a mature project, like Nova, who 
has thousands of users. We really don’t have the power to force our users to do 
what they don’t like to do.

I also recognized there are several disagreements from the Barbican team. Per 
my understanding, most of the complaints are about the re-invention of Barbican 
equivalent functionality in Magnum. To address that, I am going to propose an 
idea to achieve the goal without duplicating Barbican. In particular, I suggest 
adding support for an additional authentication system (Keystone in particular) 
for our Kubernetes bay (and potentially for swarm/mesos). As a result, users can 
specify how to secure their bay’s API endpoint:
· TLS: This option requires Barbican to be installed for storing the 
TLS certificates.
· Keystone: This option doesn’t require Barbican. Users will use their 
OpenStack credentials to log into Kubernetes.

I believe this is a sensible option that addresses the original problem 
statement in [1]:

"Magnum currently controls Kubernetes API services using unauthenticated HTTP. 
If an attacker knows the api_address of a Kubernetes Bay, (s)he can control the 
cluster without any access control."

The [1] problem statement is authenticating the bay API endpoint, not 
encrypting it. With the option you propose, we can leave the existing 
tls-disabled attribute alone and continue supporting encryption. Using Keystone 
to authenticate the Kubernetes API already exists outside of Magnum in 
Hypernetes [2]. We will need to investigate support for the other coe types.

[1] https://github.com/openstack/magnum/blob/master/specs/tls-support-magnum.rst
[2] http://thenewstack.io/hypernetes-brings-multi-tenancy-microservices/
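For illustration, the Keystone option described above amounts to the bay's API 
endpoint validating an incoming token against Keystone before serving a 
request. A minimal sketch, assuming the keystoneauth1 library is available; 
the endpoint URL and function name are illustrative, not an existing Magnum 
interface:

    # Minimal sketch: validate a user-supplied token against Keystone before
    # serving a request on the bay's API endpoint. Assumes keystoneauth1;
    # AUTH_URL and the function name are illustrative only.
    from keystoneauth1 import exceptions as ks_exc
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    AUTH_URL = "https://keystone.example.com/v3"  # hypothetical endpoint

    def is_valid_token(token, project_id):
        """Return True if Keystone accepts the token scoped to the project."""
        auth = v3.Token(auth_url=AUTH_URL, token=token, project_id=project_id)
        sess = session.Session(auth=auth)
        try:
            sess.get_token()  # forces an auth round trip; raises if invalid
            return True
        except ks_exc.ClientException:
            return False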



I am going to send another ML to describe the details. You are welcome to 
provide your inputs. Thanks.

Best regards,
Hongbin

From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: March-19-16 5:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


From: Hongbin Lu >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Saturday 19 March 2016 at 04:52
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [magnum] High Availability

...
If you disagree, I would ask you to justify why this approach works for Heat 
but not for Magnum. I also wonder whether Heat plans to set a hard dependency 
on Barbican for just protecting the hidden parameters.


There is a risk that we use decisions made by other projects to justify how 
Magnum is implemented. Heat was created 3 years ago according to 
https://www.openstack.org/software/project-navigator/ and 

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Adrian Otto
Hongbin,

I tweaked the blueprint in accordance with this approach, and approved it for 
Newton:
https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store

I think this is something we can all agree on as a middle ground. If not, I’m 
open to revisiting the discussion.

Thanks,

Adrian

On Mar 17, 2016, at 6:13 PM, Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:

Hongbin,

One alternative we could discuss as an option for operators that have a good 
reason not to use Barbican, is to use Keystone.

Keystone credentials store: 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#credentials-v3-credentials

The contents are stored in plain text in the Keystone DB, so we would want to 
generate an encryption key per bay, encrypt the certificate, and store it in 
Keystone. We would then use the same key to decrypt the certificate upon 
reading it back. This might be an acceptable middle ground for clouds that 
will not or cannot run Barbican. This should work for any OpenStack cloud 
since Grizzly. The total amount of code in Magnum would be small, as the API 
already exists. We would need a library function to encrypt and decrypt the 
data, and ideally a way to select different encryption algorithms in case one 
is judged weak at some point in the future, justifying the use of an 
alternate.
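The library function mentioned above could be quite small. A rough sketch, 
assuming the 'cryptography' package, with Fernet (AES-based) as the default 
and a registry so another algorithm can be swapped in later; names are 
illustrative:

    # Rough sketch of a per-bay encrypt/decrypt helper with a selectable
    # algorithm. Assumes the 'cryptography' package; only Fernet is wired up
    # here, and the registry exists so an algorithm judged weak later can be
    # replaced without changing callers. Names are illustrative.
    from cryptography.fernet import Fernet

    _ALGORITHMS = {
        "fernet": (Fernet.generate_key,
                   lambda key, data: Fernet(key).encrypt(data),
                   lambda key, data: Fernet(key).decrypt(data)),
    }

    def generate_key(algorithm="fernet"):
        return _ALGORITHMS[algorithm][0]()

    def encrypt_cert(key, cert_pem, algorithm="fernet"):
        # cert_pem is bytes; returns a blob that is safe to store elsewhere
        return _ALGORITHMS[algorithm][1](key, cert_pem)

    def decrypt_cert(key, blob, algorithm="fernet"):
        # reverses encrypt_cert using the same per-bay key
        return _ALGORITHMS[algorithm][2](key, blob)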

Adrian

On Mar 17, 2016, at 4:55 PM, Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:

Hongbin,

On Mar 17, 2016, at 2:25 PM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:

Adrian,

I think we need a broader set of inputs on this matter, so I moved the 
discussion from the whiteboard back to here. Please check my replies inline.

I would like to get a clear problem statement written for this.
As I see it, the problem is that there is no safe place to put certificates in 
clouds that do not run Barbican.
It seems the solution is to make it easy to add Barbican such that it's 
included in the setup for Magnum.
No, the solution is to explore a non-Barbican solution to store certificates 
securely.

I am seeking more clarity about why a non-Barbican solution is desired. Why is 
there resistance to adopting both Magnum and Barbican together? I think the 
answer is that people think they can make Magnum work with really old clouds 
that were set up before Barbican was introduced. That expectation is simply not 
reasonable. If there were a way to easily add Barbican to older clouds, perhaps 
this reluctance would melt away.

Magnum should not be in the business of credential storage when there is an 
existing service focused on that need.

Is there an issue with running Barbican on older clouds?
Anyone can choose to use the builtin option with Magnum if they don't have 
Barbican.
A known limitation of that approach is that certificates are not replicated.
I guess the *builtin* option you referred to is simply placing the certificates 
on the local file system. A few of us had concerns about this approach (in 
particular, Tom Cammann gave a -2 on the review [1]) because it cannot scale 
beyond a single conductor. Finally, we made a compromise to land this option and use it 
for testing/debugging only. In other words, this option is not for production. 
As a result, Barbican becomes the only option for production which is the root 
of the problem. It basically forces everyone to install Barbican in order to 
use Magnum.

[1] https://review.openstack.org/#/c/212395/

It's probably a bad idea to replicate them.
That's what Barbican is for. --adrian_otto
Frankly, I am surprised that you disagreed here. Back in July 2015, we all 
agreed to have two phases of implementation and the statement was made by you 
[2].


#agreed Magnum will use Barbican for an initial implementation for certificate 
generation and secure storage/retrieval.  We will commit to a second phase of 
development to eliminating the hard requirement on Barbican with an alternate 
implementation that implements the functional equivalent implemented in Magnum, 
which may depend on libraries, but not Barbican.


[2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html

The context there is important. Barbican was considered for two purposes: (1) 
CA signing capability, and (2) certificate storage. My willingness to implement 
an alternative was based on our need to get a certificate generation and 
signing solution that actually worked, as Barbican did not work for that at the 
time. I have always viewed Barbican as a suitable solution for certificate 
storage, as that was what it was first designed for. Since then, we have 
implemented certificate generation and signing logic within a library that does 
not depend on Barbican, and we can use that safely in production 

[openstack-dev] [magnum] PTL Candidacy

2016-03-19 Thread Adrian Otto
I announce my candidacy [1] for, and respectfully request your support to 
continue as, your Magnum PTL.

Here are my achievements and OpenStack experience that make me the best choice 
for this role:

* Founder of the OpenStack Containers Team
* Established vision and specification for Magnum
* Served as PTL for Magnum since the first line of code was contributed in 
November 2014
* Successful addition of Magnum to the official OpenStack projects list on 
2015-03-24
* Led numerous mid cycle meetups as PTL
* 3 terms of experience as elected PTL for Solum
* Involved with OpenStack since Austin Design Summit in 2010

What background and skills will help me to continue in this role:

* Over 20 years of experience in technical leadership positions
* Unmatched experience leading multi-organization collaborations
* Diplomacy skills for inclusion of numerous viewpoints, and ability to drive 
consensus and shared vision
* Considerable experience in public speaking, including two keynotes at 
OpenStack Summits, and numerous appearances at other events.
* Leadership of collaborative OpenStack design summit sessions
* Deep belief in Open Source, Open Development, Open Design, and Open Community
* I love OpenStack and I love containers, probably more than anyone else in the 
world in this combination.

I come from a unique perspective of working with a team that released the first 
OpenStack based container solution in any public cloud: Carina by Rackspace. 
The operational lessons learned from operating our cloud at scale are 
profoundly informative for the direction we should head in Magnum as a 
compelling solution for cloud operators who want something that will work not 
just in a lab, but at scale with real production workloads. I am proud of this 
team and our accomplishments, and hope to share our experience and insight by 
leading open source contributions in OpenStack.

Those of you who have seen me in action know that I excel in a collaborative 
environment. I encourage discussion, raise minority viewpoints for 
consideration, and steer respectfully from my depth of experience. I strive to 
be inclusive, and to grow our community because I believe that our diversity 
makes us strong.

What to expect in the Newton release cycle:

We will continue to focus on developing a compelling combination of OpenStack 
infrastructure and Container Orchestration software. We aim to combine the very 
best of both of these complementary worlds. This requires a valuable vertical 
integration of container management tools with OpenStack. Here are key focus 
areas that I believe are important for us to work on during our next release:

* Furthering Magnum's production readiness. More documentation, more security 
hardening, more operational focus.
* Making Magnum more modular and extensible to allow for more choice for cloud 
operators.
* Storage integration. Leverage Cinder volumes for use by containers, and 
explore shared filesystem capability.
* Networking integration. Further leverage Neutron through Kuryr, and 
demonstrate ways to integrate alternative options.

I look forward to your vote, and to continued success together.

Thanks,

Adrian Otto

[1] https://review.openstack.org/293729
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Adrian Otto
Hongbin,

One alternative we could discuss as an option for operators that have a good 
reason not to use Barbican, is to use Keystone.

Keystone credentials store: 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#credentials-v3-credentials

The contents are stored in plain text in the Keystone DB, so we would want to 
generate an encryption key per bay, encrypt the certificate, and store it in 
Keystone. We would then use the same key to decrypt the certificate upon 
reading it back. This might be an acceptable middle ground for clouds that 
will not or cannot run Barbican. This should work for any OpenStack cloud 
since Grizzly. The total amount of code in Magnum would be small, as the API 
already exists. We would need a library function to encrypt and decrypt the 
data, and ideally a way to select different encryption algorithms in case one 
is judged weak at some point in the future, justifying the use of an 
alternate.
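To complete the picture, the encrypted blob could be kept in the Keystone 
credentials API mentioned above. A rough sketch, assuming python-keystoneclient 
and an already-authenticated keystoneauth1 session, paired with an 
encrypt/decrypt helper like the one sketched earlier; function names are 
illustrative:

    # Rough sketch: store and fetch a per-bay encrypted certificate via the
    # Keystone v3 credentials API. Assumes python-keystoneclient and an
    # authenticated keystoneauth1 session; function names are illustrative.
    from keystoneclient.v3 import client as ks_client

    def store_bay_cert(session, user_id, project_id, encrypted_blob):
        ks = ks_client.Client(session=session)
        cred = ks.credentials.create(user=user_id,
                                     type="certificate",
                                     blob=encrypted_blob,
                                     project=project_id)
        return cred.id

    def load_bay_cert(session, credential_id):
        ks = ks_client.Client(session=session)
        return ks.credentials.get(credential_id).blob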

Adrian

> On Mar 17, 2016, at 4:55 PM, Adrian Otto <adrian.o...@rackspace.com> wrote:
> 
> Hongbin,
> 
>> On Mar 17, 2016, at 2:25 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
>> 
>> Adrian,
>> 
>> I think we need a broader set of inputs in this matter, so I moved the 
>> discussion from whiteboard back to here. Please check my replies inline.
>> 
>>> I would like to get a clear problem statement written for this.
>>> As I see it, the problem is that there is no safe place to put certificates 
>>> in clouds that do not run Barbican.
>>> It seems the solution is to make it easy to add Barbican such that it's 
>>> included in the setup for Magnum.
>> No, the solution is to explore a non-Barbican solution to store 
>> certificates securely.
> 
> I am seeking more clarity about why a non-Barbican solution is desired. Why 
> is there resistance to adopting both Magnum and Barbican together? I think 
> the answer is that people think they can make Magnum work with really old 
> clouds that were set up before Barbican was introduced. That expectation is 
> simply not reasonable. If there were a way to easily add Barbican to older 
> clouds, perhaps this reluctance would melt away.
> 
>>> Magnum should not be in the business of credential storage when there is an 
>>> existing service focused on that need.
>>> 
>>> Is there an issue with running Barbican on older clouds?
>>> Anyone can choose to use the builtin option with Magnum if they don't have 
>>> Barbican.
>>> A known limitation of that approach is that certificates are not replicated.
>> I guess the *builtin* option you referred to is simply placing the certificates 
>> on the local file system. A few of us had concerns about this approach (in 
>> particular, Tom Cammann gave a -2 on the review [1]) because it cannot 
>> scale beyond a single conductor. Finally, we made a compromise to land this 
>> option and use it for testing/debugging only. In other words, this option is 
>> not for production. As a result, Barbican becomes the only option for 
>> production which is the root of the problem. It basically forces everyone to 
>> install Barbican in order to use Magnum.
>> 
>> [1] https://review.openstack.org/#/c/212395/ 
>> 
>>> It's probably a bad idea to replicate them.
>>> That's what Barbican is for. --adrian_otto
>> Frankly, I am surprised that you disagreed here. Back in July 2015, we all 
>> agreed to have two phases of implementation and the statement was made by 
>> you [2].
>> 
>> 
>> #agreed Magnum will use Barbican for an initial implementation for 
>> certificate generation and secure storage/retrieval.  We will commit to a 
>> second phase of development to eliminating the hard requirement on Barbican 
>> with an alternate implementation that implements the functional equivalent 
>> implemented in Magnum, which may depend on libraries, but not Barbican.
>> 
>> 
>> [2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html
> 
> The context there is important. Barbican was considered for two purposes: (1) 
> CA signing capability, and (2) certificate storage. My willingness to 
> implement an alternative was based on our need to get a certificate 
> generation and signing solution that actually worked, as Barbican did not 
> work for that at the time. I have always viewed Barbican as a suitable 
> solution for certificate storage, as that was what it was first designed for. 
> Since then, we have implemented certificate generation and signing logic 
> within a library t

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Adrian Otto
I have trouble understanding that blueprint. I will put some remarks on the 
whiteboard. Duplicating Barbican sounds like a mistake to me.

--
Adrian

> On Mar 17, 2016, at 12:01 PM, Hongbin Lu  wrote:
> 
> The problem of missing Barbican alternative implementation has been raised 
> several times by different people. IMO, this is a very serious issue that 
> will hurt Magnum adoption. I created a blueprint for that [1] and set the PTL 
> as approver. It will be picked up by a contributor once it is approved.
> 
> [1] https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store 
> 
> Best regards,
> Hongbin
> 
> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com] 
> Sent: March-17-16 2:39 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
> Hi.
> 
> We're on the way, the API is using haproxy load balancing in the same way all 
> openstack services do here - this part seems to work fine.
> 
> For the conductor we're stopped due to bay certificates - we don't currently 
> have barbican so local was the only option. To get them accessible on all 
> nodes we're considering two options:
> - store bay certs in a shared filesystem, meaning a new set of credentials in 
> the boxes (and a process to renew fs tokens)
> - deploy barbican (some bits of puppet missing we're sorting out)
> 
> More news next week.
> 
> Cheers,
> Ricardo
> 
>> On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans) 
>>  wrote:
>> All,
>> 
>> Does anyone have experience deploying Magnum in a highly-available fashion?
>> If so, I’m interested in learning from your experience. My biggest 
>> unknown is the Conductor service. Any insight you can provide is 
>> greatly appreciated.
>> 
>> Regards,
>> Daneyon Hansen
>> 
>> __
>>  OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-18 Thread Adrian Otto
Hongbin,

On Mar 17, 2016, at 2:25 PM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:

Adrian,

I think we need a broader set of inputs on this matter, so I moved the 
discussion from whiteboard back to here. Please check my replies inline.

I would like to get a clear problem statement written for this.
As I see it, the problem is that there is no safe place to put certificates in 
clouds that do not run Barbican.
It seems the solution is to make it easy to add Barbican such that it's 
included in the setup for Magnum.
No, the solution is to explore a non-Barbican solution to store certificates 
securely.

I am seeking more clarity about why a non-Barbican solution is desired. Why is 
there resistance to adopting both Magnum and Barbican together? I think the 
answer is that people think they can make Magnum work with really old clouds 
that were set up before Barbican was introduced. That expectation is simply not 
reasonable. If there were a way to easily add Barbican to older clouds, perhaps 
this reluctance would melt away.

Magnum should not be in the business of credential storage when there is an 
existing service focused on that need.

Is there an issue with running Barbican on older clouds?
Anyone can choose to use the builtin option with Magnum if they don't have 
Barbican.
A known limitation of that approach is that certificates are not replicated.
I guess the *builtin* option you referred to is simply placing the certificates 
on the local file system. A few of us had concerns about this approach (in 
particular, Tom Cammann gave a -2 on the review [1]) because it cannot scale 
beyond a single conductor. Finally, we made a compromise to land this option and use it 
for testing/debugging only. In other words, this option is not for production. 
As a result, Barbican becomes the only option for production which is the root 
of the problem. It basically forces everyone to install Barbican in order to 
use Magnum.

[1] https://review.openstack.org/#/c/212395/

It's probably a bad idea to replicate them.
That's what Barbican is for. --adrian_otto
Frankly, I am surprised that you disagreed here. Back in July 2015, we all 
agreed to have two phases of implementation and the statement was made by you 
[2].


#agreed Magnum will use Barbican for an initial implementation for certificate 
generation and secure storage/retrieval.  We will commit to a second phase of 
development to eliminating the hard requirement on Barbican with an alternate 
implementation that implements the functional equivalent implemented in Magnum, 
which may depend on libraries, but not Barbican.


[2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html

The context there is important. Barbican was considered for two purposes: (1) 
CA signing capability, and (2) certificate storage. My willingness to implement 
an alternative was based on our need to get a certificate generation and 
signing solution that actually worked, as Barbican did not work for that at the 
time. I have always viewed Barbican as a suitable solution for certificate 
storage, as that was what it was first designed for. Since then, we have 
implemented certificate generation and signing logic within a library that does 
not depend on Barbican, and we can use that safely in production use cases. 
What we don’t have built in is what Barbican is best at, secure storage for our 
certificates that will allow multi-conductor operation.

I am opposed to the idea that Magnum should re-implement Barbican for 
certificate storage just because operators are reluctant to adopt it. If we 
need to ship a Barbican instance along with each Magnum control plane, so be 
it, but I don’t see the value in re-inventing the wheel. I promised the 
OpenStack community that we were out to integrate with and enhance OpenStack, 
not to replace it.

Now, with all that said, I do recognize that not all clouds are motivated to 
use all available security best practices. They may be operating in 
environments that they believe are already secure (because of a secure 
perimeter), and that it’s okay to run fundamentally insecure software within 
those environments. As misguided as this viewpoint may be, it’s common. My 
belief is that it’s best to offer the best practice by default, and only allow 
insecure operation when someone deliberately turns off fundamental security 
features.

With all this said, I also care about Magnum adoption as much as all of us, so 
I’d like us to think creatively about how to strike the right balance between 
re-implementing existing technology, and making that technology easily 
accessible.

Thanks,

Adrian


Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-17-16 4:32 PM
To: OpenStack Development Mailing List (n

Re: [openstack-dev] [magnum-ui] Proposed Core addition, and removal notice

2016-03-15 Thread Adrian Otto
Hi,

Voting has concluded. Welcome, Shu Muto to the magnum-UI core team! I will 
announce your new status at today's team meeting.

Thanks,

Adrian

> On Mar 14, 2016, at 5:40 PM, Shuu Mutou  wrote:
> 
> Hi team, 
> 
> Thank you very much for voting for me.
> I'm looking forward to working more with our peers.
> However, when does this vote end?
> 
> Shu Muto
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-14 Thread Adrian Otto
to go without impacting the rest of the team. The 
team as a whole would agree to develop all features for at least the reference 
OS.
Could we re-confirm that this is a team agreement? There is no harm in 
re-confirming it at the design summit/on the ML/in a team meeting. Frankly, it 
doesn’t seem to be one.

Then individuals or companies who are passionate about an alternative OS can 
develop the features for that OS.

Corey

On Sat, Mar 5, 2016 at 12:30 AM Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:


From: Adrian Otto 
[mailto:adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>]
Sent: March-04-16 6:31 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Steve,

On Mar 4, 2016, at 2:41 PM, Steven Dake (stdake) 
<std...@cisco.com<mailto:std...@cisco.com>> wrote:

From: Adrian Otto <adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Friday, March 4, 2016 at 12:48 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Hongbin,

To be clear, this pursuit is not about what OS options cloud operators can 
select. We will be offering a method of choice. It has to do with what we plan 
to build comprehensive testing for,
This is easy. Once we build comprehensive tests for the first OS, just re-run 
it for other OS(s).

and the implications that has on our pace of feature development. My guidance 
here is that we resist the temptation to create a system with more permutations 
than we can possibly support. The relation between bay node OS, Heat Template, 
Heat Template parameters, COE, and COE dependencies (cloud-init, docker, 
flannel, etcd, etc.) is multiplicative in nature. From the mid cycle, it was 
clear to me that:

1) We want to test at least one OS per COE from end-to-end with comprehensive 
functional tests.
2) We want to offer clear and precise integration points to allow cloud 
operators to substitute their own OS in place of whatever one is the default 
for the given COE.

A COE shouldn’t necessarily have a default that locks out other defaults.  
Magnum devs are the experts in how these systems operate, and as such need to 
take on the responsibility of the implementation for multi-os support.

3) We want to control the total number of configuration permutations to 
simplify our efforts as a project. We agreed that gate testing all possible 
permutations is intractable.

I disagree with this point, but I don't have the bandwidth available to prove 
it ;)

That’s exactly my point. It takes a chunk of human bandwidth to carry that 
responsibility. If we had a system engineer assigned from each of the various 
upstream OS distros working with Magnum, this would not be a big deal. 
Expecting our current contributors to support a variety of OS variants is not 
realistic.
You have my promise to support an additional OS for 1 or 2 popular COEs.

Change velocity among all the components we rely on has been very high. We see 
some of our best contributors frequently sidetracked in the details of the 
distros releasing versions of code that won’t work with ours. We want to 
upgrade a component to add a new feature, but struggle to because the new 
release of the distro that offers that component is otherwise incompatible. 
Multiply this by more distros, and we expect a real problem.
At Magnum upstream, the overhead doesn’t seem to come from the OS. Perhaps, 
that is specific to your downstream?

There is no harm if you have 30 gates running the various combinations.  
Infrastructure can handle the load.  Whether devs have the cycles to make a 
fully bulletproof gate is the question I think you answered with the word 
intractable.

Actually, our existing gate tests are really stressing out our CI infra. At 
least one of the new infrastructure providers that replaced HP has equipment 
that runs considerably slower. For example, our swarm functional gate now 
frequently fails because it can’t finish before the allowed time limit of 2 
hours, whereas it could finish substantially faster before. If we expanded the 
workload considerably, we might quickly work to the detriment of other projects 
by perpetually clogging the CI pipelines. We want to be a good citizen of the 
openstack CI community. Testing configuration of third party software should be 
done with third party CI setups. That’s one of the reasons those exist. 
Ideally, each would be maintained by those who have a strategic (commercial?) 
interest in support for that particular OS.

I can t

Re: [openstack-dev] Branch miss-match between server and client in Kilo

2016-03-10 Thread Adrian Otto
Hi there Janki. Thanks for catching that. I think we can address this by 
creating a branch for the client that aligns with kilo. I’ve triaged the magnum 
bug on this, and I’m willing to help drive it to resolution together.

Regards,

Adrian

On Mar 9, 2016, at 8:16 PM, Janki Chhatbar 
> wrote:

Hi All

Greetings for the day!

I have noticed that while installing OpenStack Kilo using DevStack, the server 
components cloned are stable/kilo whereas the client components cloned are 
master. This leads to errors during installation or command mismatches. For 
example:

In Tacker,
the tacker stable/kilo git repo points to an incorrect git repo URL. I have 
filed a bug and proposed a patch for this 
(https://bugs.launchpad.net/tacker/+bug/1555130).

In Magnum,
Magnum stable/kilo clones python-magnumclient master, which leads to command 
mismatches (https://bugs.launchpad.net/magnum/+bug/1509273).


  1.  Does this affect all other services?
  2.  Does this mean that the branch needs to be changed for all the services' 
clients? The change would be in the /devstack/lib/{service} file, in the 
"GITBRANCH" variable.

 If changes are required, I am willing to work on those.

Thanking you

Janki Chhatbar
OpenStack | SDN | Docker
(+91) 9409239106
simplyexplainedblog.wordpress.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum-ui] Proposed Core addition, and removal notice

2016-03-04 Thread Adrian Otto
Magnum UI Cores,

I propose the following changes to the magnum-ui core group [1]:

+ Shu Muto
- Dims (Davanum Srinivas), by request - justified by reduced activity level.

Please respond with your +1 votes to approve this change or -1 votes to oppose.

Thanks,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][magnum-ui] Liaisons for I18n

2016-03-04 Thread Adrian Otto
Kato,

I have confirmed with Shu Muto, who will be assuming our I18n Liaison role for 
Magnum until further notice. Thanks for raising this important request.

Regards,

Adrian

> On Mar 3, 2016, at 6:53 AM, KATO Tomoyuki  wrote:
> 
> I added Magnum to the list... Feel free to add your name and IRC nick, Shu.
> 
> https://wiki.openstack.org/wiki/CrossProjectLiaisons#I18n
> 
>> One thing to note.
>> 
>> The role of the i18n liaison is not to keep the project well translated.
>> The main role is on the project side,
>> for example, to encourage i18n-related reviews and fixes, or
>> to suggest what kind of coding is recommended from an i18n point of view.
> 
> Yep, that is a reason why a core reviewer is preferred for liaison.
> We sometimes have various requirements:
> word ordering (block trans), n-plural form, and so on.
> Some of them may not be important for Japanese.
> 
> Regards,
> KATO Tomoyuki
> 
>> 
>> Akihiro
>> 
>> 2016-03-02 12:17 GMT+09:00 Shuu Mutou :
>>> Hi Hongbin, Yuanying and team,
>>> 
>>> Thank you for your recommendation.
>>> I'm keeping the EN-to-JP translation of Magnum-UI at 100% every day.
>>> I'll do my best if I become the liaison.
>>> 
>>> Since translation has become another point of review for Magnum-UI, I hope 
>>> that members translate Magnum-UI into their native languages.
>>> 
>>> Best regards,
>>> Shu Muto
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-04 Thread Adrian Otto
Steve,

On Mar 4, 2016, at 2:41 PM, Steven Dake (stdake) 
<std...@cisco.com<mailto:std...@cisco.com>> wrote:

From: Adrian Otto <adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Friday, March 4, 2016 at 12:48 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Hongbin,

To be clear, this pursuit is not about what OS options cloud operators can 
select. We will be offering a method of choice. It has to do with what we plan 
to build comprehensive testing for, and the implications that has on our pace 
of feature development. My guidance here is that we resist the temptation to 
create a system with more permutations than we can possibly support. The 
relation between bay node OS, Heat Template, Heat Template parameters, COE, and 
COE dependencies (cloud-init, docker, flannel, etcd, etc.) is multiplicative 
in nature. From the mid cycle, it was clear to me that:

1) We want to test at least one OS per COE from end-to-end with comprehensive 
functional tests.
2) We want to offer clear and precise integration points to allow cloud 
operators to substitute their own OS in place of whatever one is the default 
for the given COE.

A COE shouldn’t necessarily have a default that locks out other defaults.  
Magnum devs are the experts in how these systems operate, and as such need to 
take on the responsibility of the implementation for multi-os support.

3) We want to control the total number of configuration permutations to 
simplify our efforts as a project. We agreed that gate testing all possible 
permutations is intractable.

I disagree with this point, but I don't have the bandwidth available to prove 
it ;)

That’s exactly my point. It takes a chunk of human bandwidth to carry that 
responsibility. If we had a system engineer assigned from each of the various 
upstream OS distros working with Magnum, this would not be a big deal. 
Expecting our current contributors to support a variety of OS variants is not 
realistic. Change velocity among all the components we rely on has been very 
high. We see some of our best contributors frequently sidetracked in the 
details of the distros releasing versions of code that won’t work with ours. We 
want to upgrade a component to add a new feature, but struggle to because the 
new release of the distro that offers that component is otherwise incompatible. 
Multiply this by more distros, and we expect a real problem.

There is no harm if you have 30 gates running the various combinations.  
Infrastructure can handle the load.  Whether devs have the cycles to make a 
fully bulletproof gate is the question I think you answered with the word 
intractable.

Actually, our existing gate tests are really stressing out our CI infra. At 
least one of the new infrastructure providers that replaced HP has equipment 
that runs considerably slower. For example, our swarm functional gate now 
frequently fails because it can’t finish before the allowed time limit of 2 
hours, whereas it could finish substantially faster before. If we expanded the 
workload considerably, we might quickly work to the detriment of other projects 
by perpetually clogging the CI pipelines. We want to be a good citizen of the 
openstack CI community. Testing configuration of third party software should be 
done with third party CI setups. That’s one of the reasons those exist. 
Ideally, each would be maintained by those who have a strategic (commercial?) 
interest in support for that particular OS.

I can tell you in Kolla we spend a lot of cycles just getting basic gating 
going for building containers and then deploying them.  We have even made 
inroads into testing the deployment.  We do CentOS, Ubuntu, and soon Oracle 
Linux, for both source and binary and build and deploy.  Lots of gates and if 
they aren't green we know the patch is wrong.

Remember that COE’s are tested on nova instances within heat stacks. Starting 
lots of nova instances within devstack in the gates is problematic. We are 
looking into using a libvirt-lxc instance type from nova instead of a 
libvirt-kvm instance to help alleviate this. Until then, limiting the scope of 
our gate tests is appropriate. We will continue our efforts to make them 
reasonably efficient.

Thanks,

Adrian


Regards
-steve


Note that it will take a thoughtful approach (subject to discussion) to balance 
these interests. Please take a moment to review the interests above. Do you or 
others disagree with these? If so, why?

Adrian

On Mar 4, 2016, at 9:09 AM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com&

Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-04 Thread Adrian Otto
Hongbin,

To be clear, this pursuit is not about what OS options cloud operators can 
select. We will be offering a method of choice. It has to do with what we plan 
to build comprehensive testing for, and the implications that has on our pace 
of feature development. My guidance here is that we resist the temptation to 
create a system with more permutations than we can possibly support. The 
relation between bay node OS, Heat Template, Heat Template parameters, COE, and 
COE dependencies (cloud-init, docker, flannel, etcd, etc.) is multiplicative 
in nature. From the mid cycle, it was clear to me that:

1) We want to test at least one OS per COE from end-to-end with comprehensive 
functional tests.
2) We want to offer clear and precise integration points to allow cloud 
operators to substitute their own OS in place of whatever one is the default 
for the given COE.
3) We want to control the total number of configuration permutations to 
simplify our efforts as a project. We agreed that gate testing all possible 
permutations is intractable.

Note that it will take a thoughtful approach (subject to discussion) to balance 
these interests. Please take a moment to review the interests above. Do you or 
others disagree with these? If so, why?

Adrian

On Mar 4, 2016, at 9:09 AM, Hongbin Lu 
> wrote:

I don’t think there is any consensus on supporting a single distro. There are 
multiple disagreements on this thread, including from several senior team 
members and a project co-founder. This topic should be re-discussed (possibly at the 
design summit).

Best regards,
Hongbin

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: March-04-16 11:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

I don't think anyone is saying that code should somehow block support for 
multiple distros. The discussion at midcycle was about what the we should gate 
on and ensure feature parity for as a team. Ideally, we'd like to get support 
for every distro, I think, but no one wants to have that many gates. Instead, 
the consensus at the midcycle was to have 1 reference distro for each COE, gate 
on those and develop features there, and then have any other distros be 
maintained by those in the community that are passionate about them.

The issue also isn't about how difficult or not it is. The problem we want to 
avoid is spending precious time guaranteeing that new features and bug fixes 
make it through multiple distros.

Corey

On Fri, Mar 4, 2016 at 11:18 AM Steven Dake (stdake) 
> wrote:
My position on this is simple.

Operators are used to using specific distros because that is what they used in 
the 90s, the 00s, and the 10s.  Yes, 25 years of using a distro, and you 
learn it inside and out.  This means you don't want to relearn a new distro, 
especially if you're an RPM user going to DEB or a DEB user going to RPM.  These 
are non-starter options for operators, and as a result, mean that distro choice 
is a must.  Since CoreOS is a new OS in the marketplace, it may make sense to 
consider placing it in "third" position in terms of support.

Besides that problem, various distribution companies will only support distros 
running in VMs if they match the host kernel, which makes total sense to me.  
This means on an Ubuntu host, if I want support, I need to run Ubuntu VMs, and on a 
RHEL host I want to run RHEL VMs, because, hey, I want my issues supported.

For these reasons and these reasons alone, there is no good rationale to remove 
multi-distro support  from Magnum.  All I've heard in this thread so far is 
"its too hard".  Its not too hard, especially with Heat conditionals making 
their way into Mitaka.

Regards
-steve

From: Hongbin Lu >
Reply-To: 
"openstack-dev@lists.openstack.org" 
>
Date: Monday, February 29, 2016 at 9:40 AM
To: 
"openstack-dev@lists.openstack.org" 
>
Subject: [openstack-dev] [magnum] Discussion of supporting single/multiple OS 
distro

Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested to 
have Magnum support a single OS distro (Atomic). I disagreed. I think we should 
bring the discussion to here to get broader set of inputs.

Corey O'Brien
From the midcycle, we decided we weren't going to continue to support 2 
different versions of the k8s template. Instead, we were going to maintain the 
Fedora Atomic version of k8s and remove the coreos templates from the tree. I 
don't think we should continue to develop features for coreos k8s if that is 
true.
In addition, I don't think 

Re: [openstack-dev] [magnum][magnum-ui] Liaisons for I18n

2016-03-03 Thread Adrian Otto
Yuanying,

I would be happy to appoint Shu Muto for that role. I’ll check in with him 
about that.

Thanks,

Adrian

On Mar 1, 2016, at 6:11 PM, 大塚元央 
> wrote:

Hi team,

Shu Muto is interested in becoming the liaison for magnum-ui.
He has put great effort into translating English to Japanese in magnum-ui and 
horizon.
I recommend him as the liaison.

Thanks
-yuanying

On Mon, Feb 29, 2016 at 23:56, Hongbin Lu 
>:
Hi team,

FYI, the I18n team needs a liaison from magnum-ui. Please contact the i18n team 
if you are interested in this role.

Best regards,
Hongbin

From: Ying Chun Guo [mailto:guoyi...@cn.ibm.com]
Sent: February-29-16 3:48 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [all][i18n] Liaisons for I18n

Hello,

Mitaka translation will start soon, this week.
For the Mitaka translation, IBM full-time translators will join the
translation team and work with community translators.
With their help, the I18n team is able to cover more projects.
So I need liaisons from dev projects who can help the I18n team work
smoothly with the development team during the release cycle.

I especially need liaisons in the projects below, which are in the Mitaka 
translation plan:
nova, glance, keystone, cinder, swift, neutron, heat, horizon, ceilometer.

I also need liaisons from Horizon plugin projects, which are ready on the 
translation website:
trove-dashboard, sahara-dashboard, designate-dashboard, magnum-ui,
monasca-ui, murano-dashboard and senlin-dashboard.
I need liaisons to tell us whether they are ready for translation from the 
project's point of view.

As to other projects, liaisons are welcomed too.

Here is a description of the I18n liaison role:
- The liaison should be a core reviewer for the project and understand the i18n 
status of this project.
- The liaison should understand the project release schedule very well.
- The liaison should notify the I18n team of important moments in the project 
release in a timely manner,
for example, the soft string freeze, the hard string freeze, and the cutting of 
RC1.
- The liaison should take care of translation patches to the project, and make 
sure the patches are
successfully merged into the final release version. When a translation patch 
fails, the liaison
should notify the I18n team.

If you are interested in being a liaison and helping translators,
add your information here: 
https://wiki.openstack.org/wiki/CrossProjectLiaisons#I18n .

Thank you for your support.
Best regards
Ying Chun Guo (Daisy)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-01 Thread Adrian Otto
This issue involves what I refer to as "OS religion". Operators have this WRT 
bay nodes, but users don't. I suppose this is a key reason why OpenStack does 
not have any concept of supported OS images today. While I can see the value in 
offering various choices in Magnum, maintaining a reference implementation of 
an OS image has shown that it requires non-trivial resources, and expanding 
that to several will certainly require more. The question really comes down to 
the importance of this particular choice as a development team focus. Is it 
more important than a compelling network or storage integration with OpenStack 
services? I doubt it.

We all agree there should be a way to use an alternate OS image with Magnum. 
That has been our intent from the start. We are not discussing removing that 
option. However, rather than having multiple OS images the Magnum team 
maintains, maybe we could clearly articulate how to plug in to Magnum, and set 
up a third party CI, and allow various OS vendors to participate to make their 
options work with those requirements. If this approach works, then it may even 
reduce the need for a reference implementation at all if multiple upstream 
options result.

--
Adrian

On Mar 1, 2016, at 12:28 AM, Guz Egor 
<guz_e...@yahoo.com<mailto:guz_e...@yahoo.com>> wrote:

Adrian,

I disagree; the host OS is very important for operators because of integration 
with all internal tools/repos/etc.

I think it makes sense to limit OS support in the Magnum main source, but I am 
not sure that Fedora Atomic is the right choice.
First of all, there is no documentation about it, and I don't think it's 
used/tested a lot by the Docker/Kub/Mesos community.
It makes sense to go with Ubuntu (I believe it's still the most adopted 
platform across all three COEs and OpenStack deployments)
and CoreOS (which is highly adopted/tested in the Kub community, and Mesosphere 
DCOS uses it as well).

We can implement CoreOS support as a driver and users can use it as a reference 
implementation.

---
Egor

____
From: Adrian Otto <adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Sent: Monday, February 29, 2016 10:36 AM
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Consider this: Which OS runs on the bay nodes is not important to end users. 
What matters to users is the environments their containers execute in, which 
has only one thing in common with the bay node OS: the kernel. The linux 
syscall interface is stable enough that the various linux distributions can all 
run concurrently in neighboring containers sharing same kernel. There is really 
no material reason why the bay OS choice must match what distro the container 
is based on. Although I’m persuaded by Hongbin’s concern to mitigate risk of 
future changes WRT whatever OS distro is the prevailing one for bay nodes, 
there are a few items of concern about duality I’d like to zero in on:

1) Participation from Magnum contributors to support the CoreOS specific 
template features has been weak in recent months. By comparison, participation 
relating to Fedora/Atomic has been much stronger.

2) Properly testing multiple bay node OS distros (would) significantly increase 
the run time and complexity of our functional tests.

3) Having support for multiple bay node OS choices requires more extensive 
documentation, and more comprehensive troubleshooting details.

If we proceed with just one supported distro for bay nodes, and offer 
extensibility points to allow alternates to be used in place of it, we should 
be able to address the risk concern of the chosen distro by selecting an 
alternate when that change is needed, by using those extensibility points. 
These include the ability to specify your own bay image, and the ability to use 
your own associated Heat template.

I see value in risk mitigation; it may make sense to simplify in the short term 
and address that need when it becomes necessary. My point of view might be 
different if we had contributors willing and ready to address the variety of 
drawbacks that accompany the strategy of supporting multiple bay node OS 
choices. In absence of such a community interest, my preference is to simplify 
to increase our velocity. This seems to me to be a relatively easy way to 
reduce complexity around heat template versioning. What do you think?

Thanks,

Adrian

On Feb 29, 2016, at 8:40 AM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:

Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested to 
have Magnum support a single OS distro (Atomic). I disagreed. I think we should 
bring the discussion to here to get broader set of inputs.

Corey O'Brien
From the midcycle, we decided we weren't going to c

Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-02-29 Thread Adrian Otto
Consider this: Which OS runs on the bay nodes is not important to end users. 
What matters to users is the environments their containers execute in, which 
has only one thing in common with the bay node OS: the kernel. The linux 
syscall interface is stable enough that the various linux distributions can all 
run concurrently in neighboring containers sharing same kernel. There is really 
no material reason why the bay OS choice must match what distro the container 
is based on. Although I’m persuaded by Hongbin’s concern to mitigate risk of 
future changes WRT whatever OS distro is the prevailing one for bay nodes, 
there are a few items of concern about duality I’d like to zero in on:

1) Participation from Magnum contributors to support the CoreOS specific 
template features has been weak in recent months. By comparison, participation 
relating to Fedora/Atomic has been much stronger.

2) Properly testing multiple bay node OS distros (would) significantly increase 
the run time and complexity of our functional tests.

3) Having support for multiple bay node OS choices requires more extensive 
documentation, and more comprehensive troubleshooting details.

If we proceed with just one supported distro for bay nodes, and offer 
extensibility points to allow alternates to be used in place of it, we should 
be able to address the risk concern of the chosen distro by selecting an 
alternate when that change is needed, by using those extensibility points. 
These include the ability to specify your own bay image, and the ability to use 
your own associated Heat template.

I see value in risk mitigation; it may make sense to simplify in the short term 
and address that need when it becomes necessary. My point of view might be 
different if we had contributors willing and ready to address the variety of 
drawbacks that accompany the strategy of supporting multiple bay node OS 
choices. In absence of such a community interest, my preference is to simplify 
to increase our velocity. This seems to me to be a relatively easy way to 
reduce complexity around heat template versioning. What do you think?

Thanks,

Adrian

On Feb 29, 2016, at 8:40 AM, Hongbin Lu 
> wrote:

Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested to 
have Magnum support a single OS distro (Atomic). I disagreed. I think we should 
bring the discussion to here to get broader set of inputs.

Corey O'Brien
From the midcycle, we decided we weren't going to continue to support 2 
different versions of the k8s template. Instead, we were going to maintain the 
Fedora Atomic version of k8s and remove the coreos templates from the tree. I 
don't think we should continue to develop features for coreos k8s if that is 
true.
In addition, I don't think we should break the coreos template by adding the 
trust token as a heat parameter.

Hongbin Lu
I was at the midcycle and I don't remember any decision to remove CoreOS 
support. Why do you want to remove the CoreOS templates from the tree? Please 
note that this is a very big decision, so please discuss it with the team 
thoughtfully and make sure everyone agrees.

Corey O'Brien
Removing the coreos templates was a part of the COE drivers decision. Since 
each COE driver will only support 1 distro+version+coe we discussed which ones 
to support in tree. The decision was that instead of trying to support every 
distro and every version for every coe, the magnum tree would only have support 
for 1 version of 1 distro for each of the 3 COEs (swarm/docker/mesos). Since we 
already are going to support Atomic for swarm, removing coreos and keeping 
Atomic for kubernetes was the favored choice.

Hongbin Lu
Strongly disagree. It is a huge risk to support a single distro. The selected 
distro could die in the future. Who knows. Why make Magnum take this huge risk? 
Again, supporting a single distro is a very big decision. Please bring it up to 
the team and have it discussed thoughtfully before making any decision. Also, 
Magnum doesn't have to support every distro and every version for every COE, 
but it should support *more than one* popular distro for some COEs (especially 
the popular COEs).

Corey O'Brien
The discussion at the midcycle started from the idea of adding support for RHEL 
and CentOS. We all discussed and decided that we wouldn't try to support 
everything in tree. Magnum would provide support in-tree for 1 per COE and the 
COE driver interface would allow others to add support for their preferred 
distro out of tree.

Hongbin Lu
I agree with the part that "we wouldn't try to support everything in tree". 
That doesn't imply a decision to support a single distro. Again, supporting a 
single distro is a huge risk. Why make Magnum take this huge risk?

[1] https://review.openstack.org/#/c/277284/

Best regards,
Hongbin
__
OpenStack Development Mailing 

Re: [openstack-dev] [magnum] containers across availability zones

2016-02-24 Thread Adrian Otto
Ricardo,

The blueprint is approved, thanks!

Adrian

> On Feb 24, 2016, at 2:53 PM, Ricardo Rocha <rocha.po...@gmail.com> wrote:
> 
> Thanks, done.
> 
> https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
> 
> We might have something already to expose the labels in the docker
> daemon config.
> 
> On Wed, Feb 24, 2016 at 6:01 PM, Vilobh Meshram
> <vilobhmeshram.openst...@gmail.com> wrote:
>> +1 from me too for the idea. Please file a blueprint. Seems feasible and
>> useful.
>> 
>> -Vilobh
>> 
>> 
>> On Tue, Feb 23, 2016 at 7:25 PM, Adrian Otto <adrian.o...@rackspace.com>
>> wrote:
>>> 
>>> Ricardo,
>>> 
>>> Yes, that approach would work. I don’t see any harm in automatically
>>> adding tags to the docker daemon on the bay nodes as part of the swarm heat
>>> template. That would allow the filter selection you described.
>>> 
>>> Adrian
>>> 
>>>> On Feb 23, 2016, at 4:11 PM, Ricardo Rocha <rocha.po...@gmail.com>
>>>> wrote:
>>>> 
>>>> Hi.
>>>> 
>>>> Has anyone looked into having magnum bay nodes deployed in different
>>>> availability zones? The goal would be to have multiple instances of a
>>>> container running on nodes across multiple AZs.
>>>> 
>>>> Looking at docker swarm this could be achieved using (for example)
>>>> affinity filters based on labels. Something like:
>>>> 
>>>> docker run -it -d -p 80:80 --label nova.availability-zone=my-zone-a
>>>> nginx
>>>> https://docs.docker.com/swarm/scheduler/filter/#use-an-affinity-filter
>>>> 
>>>> We can do this if we change the templates/config scripts to add to the
>>>> docker daemon params some labels exposing availability zone or other
>>>> metadata (taken from the nova metadata).
>>>> 
>>>> https://docs.docker.com/engine/userguide/labels-custom-metadata/#daemon-labels
>>>> 
>>>> It's a bit less clear how we would get heat to launch nodes across
>>>> availability zones using ResourceGroup(s), but there are other heat
>>>> resources that support it (i'm sure this can be done).
>>>> 
>>>> Does this make sense? Any thoughts or alternatives?
>>>> 
>>>> If it makes sense i'm happy to submit a blueprint.
>>>> 
>>>> Cheers,
>>>> Ricardo
>>>> 
>>>> 
>>>> __
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] containers across availability zones

2016-02-23 Thread Adrian Otto
Ricardo,

Yes, that approach would work. I don’t see any harm in automatically adding 
tags to the docker daemon on the bay nodes as part of the swarm heat template. 
That would allow the filter selection you described.
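
For illustration, a minimal sketch of how a bay-node config script could derive 
such labels, assuming the node can reach the standard OpenStack metadata service 
and that meta_data.json exposes availability_zone and uuid; how the output gets 
wired into the docker daemon options by the swarm Heat template is omitted.

    import json
    import urllib.request

    # OpenStack metadata service endpoint (assumed reachable from the bay node).
    METADATA_URL = "http://169.254.169.254/openstack/latest/meta_data.json"


    def docker_label_args():
        """Build --label arguments for the docker daemon from Nova metadata."""
        with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
            meta = json.load(resp)
        labels = {
            "nova.availability-zone": meta.get("availability_zone", "unknown"),
            "nova.instance-id": meta.get("uuid", "unknown"),
        }
        return " ".join("--label %s=%s" % (k, v) for k, v in sorted(labels.items()))


    if __name__ == "__main__":
        # e.g. --label nova.availability-zone=my-zone-a --label nova.instance-id=...
        print(docker_label_args())

A swarm affinity filter like the one in the example below could then match on 
nova.availability-zone.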

Adrian

> On Feb 23, 2016, at 4:11 PM, Ricardo Rocha  wrote:
> 
> Hi.
> 
> Has anyone looked into having magnum bay nodes deployed in different
> availability zones? The goal would be to have multiple instances of a
> container running on nodes across multiple AZs.
> 
> Looking at docker swarm this could be achieved using (for example)
> affinity filters based on labels. Something like:
> 
> docker run -it -d -p 80:80 --label nova.availability-zone=my-zone-a nginx
> https://docs.docker.com/swarm/scheduler/filter/#use-an-affinity-filter
> 
> We can do this if we change the templates/config scripts to add to the
> docker daemon params some labels exposing availability zone or other
> metadata (taken from the nova metadata).
> https://docs.docker.com/engine/userguide/labels-custom-metadata/#daemon-labels
> 
> It's a bit less clear how we would get heat to launch nodes across
> availability zones using ResourceGroup(s), but there are other heat
> resources that support it (i'm sure this can be done).
> 
> Does this make sense? Any thoughts or alternatives?
> 
> If it makes sense i'm happy to submit a blueprint.
> 
> Cheers,
>  Ricardo
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] New Core Reviewers

2016-02-02 Thread Adrian Otto
Thanks everyone for your votes. Welcome Ton and Egor to the core team!

Regards,

Adrian

> On Feb 1, 2016, at 7:58 AM, Adrian Otto <adrian.o...@rackspace.com> wrote:
> 
> Magnum Core Team,
> 
> I propose Ton Ngo (Tango) and Egor Guz (eghobo) as new Magnum Core Reviewers. 
> Please respond with your votes.
> 
> Thanks,
> 
> Adrian Otto


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum]Remove node object from Magnum

2016-02-01 Thread Adrian Otto
Agreed.

> On Jan 31, 2016, at 10:46 PM, 王华  wrote:
> 
> Hi all,
> 
> I want to remove the node object from Magnum. The node object represents either a 
> bare metal or virtual machine node that is provisioned with an OS to run the 
> containers or, alternatively, to run Kubernetes. Magnum uses Heat to deploy the 
> nodes, so it is unnecessary to maintain a node object in Magnum; Heat can do that 
> work for us. The code for the node object is unused now, so let's remove it from 
> Magnum.
> 
> Regards,
> Wanghua
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] New Core Reviewers

2016-02-01 Thread Adrian Otto
Magnum Core Team,

I propose Ton Ngo (Tango) and Egor Guz (eghobo) as new Magnum Core Reviewers. 
Please respond with your votes.

Thanks,

Adrian Otto
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Planning Magnum Midcycle

2016-01-21 Thread Adrian Otto
Team,

We have selected Feb 18-19 for the midcycle, which will be hosted by HPE. Please 
save the date. The exact location is forthcoming, and is expected to be in 
Sunnyvale.

Thanks,

Adrian

> On Jan 11, 2016, at 11:29 AM, Adrian Otto <adrian.o...@rackspace.com> wrote:
> 
> Team,
> 
> We are planning a mid cycle meetup for the Magnum team to be held in the San 
> Francisco Bay area. If you would like to attend, please take a moment to 
> respond to this poll to select the date:
> 
> http://doodle.com/poll/k8iidtamnkwqe3hd
> 
> Thanks,
> 
> Adrian


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Planning Magnum Midcycle

2016-01-11 Thread Adrian Otto
Team,

We are planning a mid cycle meetup for the Magnum team to be held in the San 
Francisco Bay area. If you would like to attend, please take a moment to 
respond to this poll to select the date:

http://doodle.com/poll/k8iidtamnkwqe3hd

Thanks,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Temporarily remove swarm func test from gate

2016-01-07 Thread Adrian Otto
Hongbin,

I’m not aware of any viable options besides using a nonvoting gate job. Are 
there other alternatives to consider? If not, let’s proceed with that approach.

Adrian

> On Jan 7, 2016, at 3:34 PM, Hongbin Lu  wrote:
> 
> Clark,
> 
> That is true. The check pipeline must pass in order to enter the gate 
> pipeline. Here is the problem we are facing. A patch that was able to pass 
> the check pipeline is blocked in the gate pipeline due to the instability of the 
> test. The removal of the unstable test from the gate pipeline aims to unblock the 
> patches that have already passed the check.
> 
> An alternative is to remove the unstable test from the check pipeline as well, or 
> mark it as a non-voting test. If that is what the team prefers, I will adjust 
> the review accordingly.
> 
> Best regards,
> Hongbin
> 
> -Original Message-
> From: Clark Boylan [mailto:cboy...@sapwetik.org] 
> Sent: January-07-16 6:04 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
> gate
> 
> On Thu, Jan 7, 2016, at 02:59 PM, Hongbin Lu wrote:
>> Hi folks,
>> 
>> It looks like the swarm func test is currently unstable, which negatively 
>> impacts the patch submission workflow. I propose to remove it from the 
>> Jenkins gate (but keep it in Jenkins check) until it becomes stable.
>> Please find the details in the review
>> (https://review.openstack.org/#/c/264998/) and let me know if you have 
>> any concern.
>> 
> Removing it from gate but not from check doesn't necessarily help much 
> because you can only enter the gate pipeline once the change has a +1 from 
> Jenkins. Jenkins applies the +1 after check tests pass.
> 
> Clark
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-20 Thread Adrian Otto
This sounds like a source-of-truth concern. From my perspective the solution is 
not to create redundant quotas. Simply quota the Magnum resources. Lower-level 
limits *could* be queried by Magnum prior to acting to CRUD the lower-level 
resources. In that case we could check the maximum allowed number of (or access 
rate of) whatever lower-level resource before requesting it, and raise an 
understandable error. I see that as an enhancement rather than a must-have. In 
all honesty, that feature is probably more complicated than it's worth in terms 
of value.
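
For illustration, a rough sketch of the kind of pre-flight check described above. 
The nova_limits dict stands in for whatever a deployment reads from the Nova 
absolute-limits API; only the two limit keys shown are assumed.

    class QuotaExceeded(Exception):
        """Raised when a request would exceed a lower-level (Nova) limit."""


    def check_nova_capacity(nova_limits, requested_nodes):
        """Fail fast with an understandable error before asking Heat/Nova to act.

        nova_limits is assumed to look like the Nova absolute-limits output,
        e.g. {'maxTotalInstances': 10, 'totalInstancesUsed': 7}.
        """
        available = (nova_limits["maxTotalInstances"]
                     - nova_limits["totalInstancesUsed"])
        if requested_nodes > available:
            raise QuotaExceeded(
                "Bay needs %d instances but only %d remain in your Nova quota"
                % (requested_nodes, available))


    if __name__ == "__main__":
        try:
            check_nova_capacity({"maxTotalInstances": 10,
                                 "totalInstancesUsed": 7}, 5)
        except QuotaExceeded as exc:
            print(exc)  # Bay needs 5 instances but only 3 remain in your Nova quota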

--
Adrian

On Dec 20, 2015, at 6:36 AM, Jay Lau wrote:

I also have the same concern as Lee. Magnum depends on Heat, and Heat needs to 
call Nova, Cinder, and Neutron to create the Bay resources. But Nova and Cinder 
each have their own quota policies; if we define quotas again in Magnum, how do 
we handle the conflict? Another point is that limiting Bays by quota seems a bit 
coarse-grained, as different bays may have different configurations and resource 
requests. Comments? Thanks.

On Thu, Dec 17, 2015 at 4:10 AM, Lee Calcote wrote:
Food for thought - there is a cost to FIPs (in the case of public IP 
addresses), security groups (to a lesser extent, but in terms of the 
computation of many hundreds of them), etc. Administrators may wish to enforce 
quotas on a variety of resources that are direct costs or indirect costs (e.g. 
# of bays, where a bay consists of a number of multi-VM / multi-host pods and 
services, which consume CPU, mem, etc.).

If Magnum quotas are brought forward, they should govern (enforce quota) on 
Magnum-specific constructs only, correct? Resources created by Magnum COEs 
should be governed by existing quota policies governing said resources (e.g. 
Nova and vCPUs).

Lee

On Dec 16, 2015, at 1:56 PM, Tim Bell wrote:

-Original Message-
From: Clint Byrum [mailto:cl...@fewbar.com]
Sent: 15 December 2015 22:40
To: openstack-dev
Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
Resources

Hi! Can I offer a counter point?

Quotas are for _real_ resources.


The CERN container specialist agrees with you ... it would be good to
reflect on the needs given that ironic, neutron and nova are policing the
resource usage. Quotas in the past have been used for things like key pairs
which are not really real.

Memory, CPU, disk, bandwidth. These are all _closely_ tied to things that cost 
real money and cannot be conjured from thin air. As such, the user being able to 
allocate 1 billion or 2 containers is not limited by Magnum, but by real things 
that they must pay for. If they have enough Nova quota to allocate 1 billion 
tiny pods, why would Magnum stop them? Who actually benefits from that 
limitation?

So I suggest that you not add any detailed, complicated quota system to Magnum. 
If there are real limitations to the implementation that Magnum has chosen, such 
as we had in Heat (the entire stack must fit in memory), then make that the 
limit. Otherwise, let their vcpu, disk, bandwidth, and memory quotas be the 
limit, and enjoy the profit margins that having an unbound force multiplier like 
Magnum in your cloud gives you and your users!

Excerpts from Vilobh Meshram's message of 2015-12-14 16:58:54 -0800:
Hi All,

Currently, it is possible to create unlimited number of resource like
bay/pod/service/. In Magnum, there should be a limitation for user or
project to create Magnum resource, and the limitation should be
configurable[1].

I proposed following design :-

1. Introduce new table magnum.quotas
+------------+--------------+------+-----+---------+----------------+
| Field      | Type         | Null | Key | Default | Extra          |
+------------+--------------+------+-----+---------+----------------+
| id         | int(11)      | NO   | PRI | NULL    | auto_increment |
| created_at | datetime     | YES  |     | NULL    |                |
| updated_at | datetime     | YES  |     | NULL    |                |
| deleted_at | datetime     | YES  |     | NULL    |                |
| project_id | varchar(255) | YES  | MUL | NULL    |                |
| resource   | varchar(255) | NO   |     | NULL    |                |
| hard_limit | int(11)      | YES  |     | NULL    |                |
| deleted    | int(11)      | YES  |     | NULL    |                |
+------------+--------------+------+-----+---------+----------------+

resource can be Bay, Pod, Containers, etc.


2. API controller for quota will be created to make sure basic CLI
commands work.

quota-show, quota-delete, quota-create, quota-update

3. When the admin specifies a quota of X number of resources to be
created the code should abide by that. For example if hard limit for Bay
is 5
(i.e.
a project can have maximum 5 

Re: [openstack-dev] [magnum]storage for docker-bootstrap

2015-12-18 Thread Adrian Otto
Wanghua,

I see. The circular dependency you described does sound like a formidable 
challenge. Having multiple docker daemons violates the principle of least 
surprise. I worry that when it comes time to perform troubleshooting, an 
engineer would be surprised to find multiple dockers running at the same time 
within the same compute instance.

Perhaps there is a way to generate the BIP and MTU before the docker daemon is 
started, then use those while starting docker, and start both flannel and etcd 
containers so all containers on the instance can share a single docker daemon? 
Would that work at all? I guess I’d need a better understanding of exactly how 
the BIP and MTU are generated before judging if this is a good idea.
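
For reference, flannel writes the values it computes to a small environment file 
(commonly /run/flannel/subnet.env, with FLANNEL_SUBNET and FLANNEL_MTU entries), 
so one option is a tiny helper that runs after flannel and before docker and 
turns that file into daemon options. A minimal sketch, assuming that file layout:

    def docker_opts_from_flannel(path="/run/flannel/subnet.env"):
        """Translate flannel's generated subnet/MTU into docker daemon options.

        The file is assumed to contain lines such as:
          FLANNEL_SUBNET=10.100.63.1/24
          FLANNEL_MTU=1450
        so docker can be started with a matching --bip and --mtu.
        """
        env = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and "=" in line and not line.startswith("#"):
                    key, value = line.split("=", 1)
                    env[key] = value
        return "--bip=%s --mtu=%s" % (env["FLANNEL_SUBNET"], env["FLANNEL_MTU"])


    # Example result: "--bip=10.100.63.1/24 --mtu=1450"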

Adrian

On Dec 16, 2015, at 11:40 PM, 王华 
<wanghua.hum...@gmail.com<mailto:wanghua.hum...@gmail.com>> wrote:

Adrian,

When the docker daemon starts, it needs to know the bip and mtu which are 
generated by flannel. So flannel and etcd should start before docker daemon, 
but if flannel and etcd run in the same daemon, it introduces a circle. We need 
another docker daemon which is dedicated to flannel and etcd.

Regards
wanghua

On Mon, Dec 14, 2015 at 11:45 AM, Steven Dake (stdake) 
<std...@cisco.com<mailto:std...@cisco.com>> wrote:
Adrian,

It's a real shame Atomic can't execute its mission - serve as a container 
operating system. If you need some guidance on image building, find experienced 
developers on #kolla – we have extensive experience in producing containers for 
various runtime environments focused around OpenStack.

Regards
-steve


From: Adrian Otto <adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Monday, December 7, 2015 at 1:16 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]storage for docker-bootstrap

Until I see evidence to the contrary, I think adding some bootstrap complexity 
to simplify the process of bay node image management and customization is worth 
it. Think about where most users will focus customization efforts. My guess is 
that it will be within these docker images. We should ask our team to keep 
things as simple as possible while working to containerize components where 
that makes sense. That may take some creativity and a few iterations to achieve.

We can pivot on this later if we try it and hate it.

Thanks,

Adrian

On Dec 7, 2015, at 1:57 AM, Kai Qiang Wu 
<wk...@cn.ibm.com<mailto:wk...@cn.ibm.com>> wrote:


HI Hua,

From my point of view, not everything needs to be put in a container. Let's make 
the initial version simple and get it working, and then discuss other options if 
needed in IRC or the weekly meeting.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com<mailto:wk...@cn.ibm.com>
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!

王华 ---07/12/2015 10:10:38 am---Hi all, If we want to run etcd and 
flannel in container, we will introduce

From: 王华 <wanghua.hum...@gmail.com<mailto:wanghua.hum...@gmail.com>>
To: Egor Guz <e...@walmartlabs.com<mailto:e...@walmartlabs.com>>
Cc: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: 07/12/2015 10:10 am
Subject: Re: [openstack-dev] [magnum]storage for docker-bootstrap





Hi all,

If we want to run etcd and flannel in containers, we will introduce 
docker-bootstrap, which makes the setup more complex, as Egor pointed out. 
Should we pay that price?

On Sat, Nov 28, 2015 at 8:45 AM, Egor Guz 
<e...@walmartlabs.com<mailto:e...@walmartlabs.com>> wrote:

Wanghua,

I don’t think moving flannel into a container is a good idea. This setup is great 
for a dev environment, but becomes too complex from an operator's point of view 
(you add an extra Docker daemon and need an extra Cinder volume for this daemon; 
also keep in mind that it makes sense to keep the etcd data folder on Cinder 
storage as well, because etcd is a database). flannel is just three files without 
extra dependencies, and it’s much easier to download it during cloud-init ;)

I agree that we have pain with building Fedora Atomic images, but instead of 
simplifying this process we should switch to another

Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-16 Thread Adrian Otto
Clint,

> On Dec 16, 2015, at 11:56 AM, Tim Bell  wrote:
> 
>> -Original Message-
>> From: Clint Byrum [mailto:cl...@fewbar.com]
>> Sent: 15 December 2015 22:40
>> To: openstack-dev 
>> Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
>> Resources
>> 
>> Hi! Can I offer a counter point?
>> 
>> Quotas are for _real_ resources.

No. Beyond billable resources, quotas are a mechanism for limiting abusive use 
patterns from hostile users. The rate at which Bays are created, and how many 
of them you can have in total are important limits to put in the hands of cloud 
operators. Each Bay contains a keypair, which takes resources to generate and 
securely distribute. Updates to and deletion of bays cause a storm of activity 
in Heat, and even more activity in Nova. Cloud operators should have the 
ability to control the rate of activity by enforcing rate controls on Magnum 
resources before they become problematic further down in the control plane. 
Admission controls are best managed at the entrance to a system, not at the 
core.

Adrian

> The CERN container specialist agrees with you ... it would be good to
> reflect on the needs given that ironic, neutron and nova are policing the
> resource usage. Quotas in the past have been used for things like key pairs
> which are not really real.
> 
>> Memory, CPU, disk, bandwidth. These are all _closely_ tied to things that
> cost
>> real money and cannot be conjured from thin air. As such, the user being
>> able to allocate 1 billion or 2 containers is not limited by Magnum, but
> by real
>> things that they must pay for. If they have enough Nova quota to allocate
> 1
>> billion tiny pods, why would Magnum stop them? Who actually benefits from
>> that limitation?
>> 
>> So I suggest that you not add any detailed, complicated quota system to
>> Magnum. If there are real limitations to the implementation that Magnum
>> has chosen, such as we had in Heat (the entire stack must fit in memory),
>> then make that the limit. Otherwise, let their vcpu, disk, bandwidth, and
>> memory quotas be the limit, and enjoy the profit margins that having an
>> unbound force multiplier like Magnum in your cloud gives you and your
>> users!
>> 
>> Excerpts from Vilobh Meshram's message of 2015-12-14 16:58:54 -0800:
>>> Hi All,
>>> 
>>> Currently, it is possible to create unlimited number of resource like
>>> bay/pod/service/. In Magnum, there should be a limitation for user or
>>> project to create Magnum resource, and the limitation should be
>>> configurable[1].
>>> 
>>> I proposed following design :-
>>> 
>>> 1. Introduce new table magnum.quotas
>>> +------------+--------------+------+-----+---------+----------------+
>>> | Field      | Type         | Null | Key | Default | Extra          |
>>> +------------+--------------+------+-----+---------+----------------+
>>> | id         | int(11)      | NO   | PRI | NULL    | auto_increment |
>>> | created_at | datetime     | YES  |     | NULL    |                |
>>> | updated_at | datetime     | YES  |     | NULL    |                |
>>> | deleted_at | datetime     | YES  |     | NULL    |                |
>>> | project_id | varchar(255) | YES  | MUL | NULL    |                |
>>> | resource   | varchar(255) | NO   |     | NULL    |                |
>>> | hard_limit | int(11)      | YES  |     | NULL    |                |
>>> | deleted    | int(11)      | YES  |     | NULL    |                |
>>> +------------+--------------+------+-----+---------+----------------+
>>> 
>>> resource can be Bay, Pod, Containers, etc.
>>> 
>>> 
>>> 2. API controller for quota will be created to make sure basic CLI
>>> commands work.
>>> 
>>> quota-show, quota-delete, quota-create, quota-update
>>> 
>>> 3. When the admin specifies a quota of X number of resources to be
>>> created the code should abide by that. For example if hard limit for Bay
> is 5
>> (i.e.
>>> a project can have maximum 5 Bay's) if a user in a project tries to
>>> exceed that hardlimit it won't be allowed. Similarly goes for other
>> resources.
>>> 
>>> 4. Please note the quota validation only works for resources created
>>> via Magnum. Could not think of a way that Magnum to know if a COE
>>> specific utilities created a resource in background. One way could be
>>> to see the difference between whats stored in magnum.quotas and the
>>> information of the actual resources created for a particular bay in
> k8s/COE.
>>> 
>>> 5. Introduce a config variable to set quotas values.
>>> 
>>> If everyone agrees will start the changes by introducing quota
>>> restrictions on Bay creation.
>>> 
>>> Thoughts ??
>>> 
>>> 
>>> -Vilobh
>>> 
>>> [1] https://blueprints.launchpad.net/magnum/+spec/resource-quota
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> 

Re: [openstack-dev] [magnum] Removing pod, rcs and service APIs

2015-12-16 Thread Adrian Otto
Tom,

> On Dec 16, 2015, at 9:31 AM, Cammann, Tom  wrote:
> 
> I don’t see a benefit from supporting the old API through a microversion 
> when the same functionality will be available through the native API.

+1

[snip]

> Have we had any discussion on adding a v2 API and what changes (beyond 
> removing pod, rc, service) we would include in that change. What sort of 
> timeframe would we expect to remove the v1 API. I would like to move to a 
> v2 in this cycle, then we can think about removing v1 in N.

Yes, when we drop functionality from the API that’s a contract breaking change, 
and requires a new API major version. We can drop the v1 API in N if we set 
expectations in advance. I’d want that plan to be supported with some evidence 
that maintaining the v1 API was burdensome in some way. Because adoption is 
limited, deprecation of v1 is not likely to be a contentious issue.

Adrian

> 
> Tom
> 
> 
> 
> On 16/12/2015, 15:57, "Hongbin Lu"  wrote:
> 
>> Hi Tom,
>> 
>> If I remember correctly, the decision is to drop the COE-specific API 
>> (Pod, Service, Replication Controller) in the next API version. I think a 
>> good way to do that is to put a deprecated warning in current API version 
>> (v1) for the removed resources, and remove them in the next API version 
>> (v2).
>> 
>> An alternative is to drop them in current API version. If we decide to do 
>> that, we need to bump the micro-version [1], and ask users to specify the 
>> microversion as part of the requests when they want the removed APIs.
>> 
>> [1] 
>> http://docs.openstack.org/developer/nova/api_microversions.html#removing-a
>> n-api-method
>> 
>> Best regards,
>> Hongbin
>> 
>> -Original Message-
>> From: Cammann, Tom [mailto:tom.camm...@hpe.com] 
>> Sent: December-16-15 8:21 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [magnum] Removing pod, rcs and service APIs
>> 
>> I have been noticing a fair amount of redundant work going on in magnum, 
>> python-magnumclient and magnum-ui with regards to APIs we have been 
>> intending to drop support for. During the Tokyo summit it was decided 
>> that we should support only COE APIs that all COEs can support, which 
>> means dropping support for the Kubernetes-specific APIs for Pod, Service and 
>> Replication Controller.
>> 
>> Egor has submitted a blueprint[1] “Unify container actions between all 
>> COEs” which has been approved to cover this work and I have submitted the 
>> first of many patches that will be needed to unify the APIs.
>> 
>> The controversial patches are here: 
>> https://review.openstack.org/#/c/258485/ and 
>> https://review.openstack.org/#/c/258454/
>> 
>> These patches are more a forcing function for our team to decide how to 
>> correctly deprecate these APIs; as I mentioned, there is a lot of redundant 
>> work going on around these APIs. Please let me know your thoughts.
>> 
>> Tom
>> 
>> [1] https://blueprints.launchpad.net/magnum/+spec/unified-containers
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-16 Thread Adrian Otto

On Dec 16, 2015, at 2:25 PM, James Bottomley 
<james.bottom...@hansenpartnership.com<mailto:james.bottom...@hansenpartnership.com>>
 wrote:

On Wed, 2015-12-16 at 20:35 +0000, Adrian Otto wrote:
Clint,

On Dec 16, 2015, at 11:56 AM, Tim Bell 
<tim.b...@cern.ch<mailto:tim.b...@cern.ch>> wrote:

-Original Message-
From: Clint Byrum [mailto:cl...@fewbar.com]
Sent: 15 December 2015 22:40
To: openstack-dev 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
Resources

Hi! Can I offer a counter point?

Quotas are for _real_ resources.

No. Beyond billable resources, quotas are a mechanism for limiting
abusive use patterns from hostile users.

Actually, I believe this is the wrong way to look at it.  You're
confusing policy and mechanism.  Quotas are policy on resources.  The
mechanisms by which you implement quotas can also be used to limit
abuse by hostile users, but that doesn't mean that this limitation
should be part of the quota policy.

I’m not convinced. Cloud operators already use quotas as a mechanism for 
limiting abuse (intentional or accidental). They can be configured with a 
system-wide default, and can be set to a different value on a per-tenant basis. 
It would be silly to have a second mechanism for doing the same thing we 
already use quotas for. Quotas/limits can also be queried by a user, so they can 
determine why they are getting a 4XX Rate Limit response when they try to act 
on resources too rapidly.

The idea of hard-coding system-wide limits into the system makes my stomach 
turn. If you wanted to change the limit you’d need to edit the production 
system’s configuration and restart the API services. Yuck! That’s why we put 
quotas/limits into OpenStack to begin with, so that we had a sensible, visible, 
account-level configurable place to set limits.

Adrian


For instance, in Linux, the memory limit policy is implemented by the
memcg.  The user usually sees a single figure for "memory" but inside
the cgroup, that memory is split into user and kernel.  Kernel memory
limiting prevents things like fork bombs because you run out of your
kernel memory limit creating task structures before you can bring down
the host system.  However, we don't usually expose the kernel/user
split or the fact that the kmem limit mechanism can prevent fork and
inode bombs.

James

The rate at which Bays are created, and how many of them you can
have in total are important limits to put in the hands of cloud
operators. Each Bay contains a keypair, which takes resources to
generate and securely distribute. Updates to and Deletion of bays
causes a storm of activity in Heat, and even more activity in Nova.
Cloud operators should have the ability to control the rate of
activity by enforcing rate controls on Magnum resources before they
become problematic further down in the control plane. Admission
controls are best managed at the entrance to a system, not at the
core.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-16 Thread Adrian Otto
Clint,

I think you are categorically dismissing a very real ops challenge of how to 
set correct system limits, and how to adjust them in a running system. I have 
been stung by this challenge repeatedly over the years. As developers we 
*guess* at what a sensible default for a value will be for a limit, but we are 
sometimes wrong. When we are, that guess has a very real, and very negative 
impact on users of production systems. The idea of using one limit for all 
users is idealistic. I’m convinced based on my experience that it's not the 
best approach in practice. What we usually want to do is bump up a limit for a 
single user, or dynamically drop a limit for all users. The problem is that 
very few systems implement limits in a way they can be adjusted while the 
system is running, and very rarely on a per-tenant basis. So yes, I will assert 
that having a quota implementation and the related complexity is justified by 
the ability to adapt limit levels while the system is running.

Think for a moment about the pain that an ops team goes through when they have 
to take a service down that’s affecting thousands or tens of thousands of 
users. We have to send zillions of emails to customers, we need to hold 
emergency change management meetings. We have to answer questions like “why 
didn’t you test for this?” when we did test for it, and it worked fine under 
simulation, but not in a real production environment under this particular 
stimulus. "Why can’t you take the system down in sections to keep the service 
up?" When the answer to all this is “because the developers never put 
themselves in the shoes of the ops team when they designed it.”

Those who know me will attest to the fact that I care deeply about applying the 
KISS principle. The principle guides us to keep our designs as simple as 
possible unless it’s essential to make them more complex. In this case, the 
complexity is justified.

Now if there are production ops teams for large scale systems that argue that 
dynamic limits and per-user overrides are pointless, then I’ll certainly 
reconsider my position.

Adrian

> On Dec 16, 2015, at 4:21 PM, Clint Byrum  wrote:
> 
> Excerpts from Fox, Kevin M's message of 2015-12-16 16:05:29 -0800:
>> Yeah, as an op, I've run into a few things that need quotas but just have 
>> basically hardcoded values. Heat stacks, for example: it's a single global in 
>> /etc/heat/heat.conf:max_stacks_per_tenant=100. Instead of being able to 
>> tweak it for just our one project that legitimately has to create over 200 
>> stacks, I had to set it cloud-wide and I had to bounce services to do it. 
>> Please don't do that.
>> 
>> Ideally, it would be nice if the quota stuff could be pulled out into its 
>> own shared lib (oslo?) and shared amongst projects so that they don't have 
>> to spend much effort implementing quotas. Maybe then things that need 
>> quotas but don't currently have them could get them more easily.
>> 
> 
> You had to change a config value, once, and that's worse than the added
> code complexity and server load that would come from tracking quotas for
> a distributed service?
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Removing pod, rcs and service APIs

2015-12-16 Thread Adrian Otto
Yes, this topic is a good one for a spec. What I am planning to do here is copy 
the content from the BP to an etherpad in spec format and iterate on that in 
a fluid way to begin with. I will clear the BP whiteboard and simplify the 
description to cover the intent and principles of the change. Once that gels a 
little, we can contribute it for review as a spec and have a more structured 
debate.

When we finish, we will have a concise blueprint, history of our debate in 
Gerrit, a merged spec, and then we can code it. The timing of this is 
unfortunate because several key stakeholders may be away for holidays over the 
next couple of weeks. We should proceed with caution.

Adrian

On Dec 16, 2015, at 5:11 PM, Kai Qiang Wu 
<wk...@cn.ibm.com<mailto:wk...@cn.ibm.com>> wrote:


Hi Adrian,

Right now, I think:

The unify-COE-container-actions BP needs more discussion and a good design to 
make it happen (I think a spec is needed for this).
And the deprecation of the k8s-related objects needs a backup plan instead of 
dropping them directly, especially while we do not yet have any spec or design 
for the unify-COE-container BP.


Right now the work mostly happens on the UI side. For the UI, we can discuss 
whether to implement those views or not (instead of directly dropping the API 
part before a consistent design for the unify-COE-container-actions BP has come 
out).


Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com<mailto:wk...@cn.ibm.com>
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!

Adrian Otto ---17/12/2015 07:00:37 am---Tom, > On Dec 16, 2015, at 
9:31 AM, Cammann, Tom <tom.camm...@hpe.com<mailto:tom.camm...@hpe.com>> wrote:

From: Adrian Otto <adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: 17/12/2015 07:00 am
Subject: Re: [openstack-dev] [magnum] Removing pod, rcs and service APIs





Tom,

> On Dec 16, 2015, at 9:31 AM, Cammann, Tom 
> <tom.camm...@hpe.com<mailto:tom.camm...@hpe.com>> wrote:
>
> I don’t see a benefit from supporting the old API through a microversion
> when the same functionality will be available through the native API.

+1

[snip]

> Have we had any discussion on adding a v2 API and what changes (beyond
> removing pod, rc, service) we would include in that change. What sort of
> timeframe would we expect to remove the v1 API. I would like to move to a
> v2 in this cycle, then we can think about removing v1 in N.

Yes, when we drop functionality from the API that’s a contract breaking change, 
and requires a new API major version. We can drop the v1 API in N if we set 
expectations in advance. I’d want that plan to be supported with some evidence 
that maintaining the v1 API was burdensome in some way. Because adoption is 
limited, deprecation of v1 is not likely to be a contentious issue.

Adrian

>
> Tom
>
>
>
> On 16/12/2015, 15:57, "Hongbin Lu" 
> <hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:
>
>> Hi Tom,
>>
>> If I remember correctly, the decision is to drop the COE-specific API
>> (Pod, Service, Replication Controller) in the next API version. I think a
>> good way to do that is to put a deprecated warning in current API version
>> (v1) for the removed resources, and remove them in the next API version
>> (v2).
>>
>> An alternative is to drop them in current API version. If we decide to do
>> that, we need to bump the micro-version [1], and ask users to specify the
>> microversion as part of the requests when they want the removed APIs.
>>
>> [1]
>> http://docs.openstack.org/developer/nova/api_microversions.html#removing-a
>> n-api-method
>>
>> Best regards,
>> Hongbin
>>
>> -Original Message-
>> From: Cammann, Tom [mailto:tom.camm...@hpe.com]
>> Sent: December-16-15 8:21 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [magnum] Removing pod, rcs and service APIs
>>
>> I have been noticing a fair amount of redundant work going on in magnum,
>> python-magnumclient and magnum-ui with regards to APIs we have been
>> intending to drop support for. During the Tokyo summit it was decided
>> that we should support for o

Re: [openstack-dev] [magnum] Magnum conductor async container operations

2015-12-16 Thread Adrian Otto

> On Dec 16, 2015, at 6:24 PM, Joshua Harlow  wrote:
> 
> SURO wrote:
>> Hi all,
>> Please review and provide feedback on the following design proposal for
>> implementing the blueprint[1] on async-container-operations -
>> 
>> 1. Magnum-conductor would have a pool of threads for executing the
>> container operations, viz. executor_threadpool. The size of the
>> executor_threadpool will be configurable. [Phase0]
>> 2. Every time, Magnum-conductor(Mcon) receives a
>> container-operation-request from Magnum-API(Mapi), it will do the
>> initial validation, housekeeping and then pick a thread from the
>> executor_threadpool to execute the rest of the operations. Thus Mcon
>> will return from the RPC request context much faster without blocking
>> the Mapi. If the executor_threadpool is empty, Mcon will execute in a
>> manner it does today, i.e. synchronously - this will be the
>> rate-limiting mechanism - thus relaying the feedback of exhaustion.
>> [Phase0]
>> How often we are hitting this scenario, may be indicative to the
>> operator to create more workers for Mcon.
>> 3. Blocking class of operations - There will be a class of operations,
>> which can not be made async, as they are supposed to return
>> result/content inline, e.g. 'container-logs'. [Phase0]
>> 4. Out-of-order considerations for NonBlocking class of operations -
>> there is a possible race condition for a create followed by a
>> start/delete of a container, as things would happen in parallel. To
>> solve this, we will maintain a map of a container and executing thread,
>> for current execution. If we find a request for an operation for a
>> container-in-execution, we will block till the thread completes the
>> execution. [Phase0]
> 
> Does whatever performs these operations (mcon?) run in more than one process?

Yes, there may be multiple copies of magnum-conductor running on separate hosts.

> Can it be requested to create in one process then delete in another? If so is 
> that map some distributed/cross-machine/cross-process map that will be 
> inspected to see what else is manipulating a given container (so that the 
> thread can block until that is not the case... basically the map is acting 
> like a operation-lock?)

That’s how I interpreted it as well. This is a race prevention technique so 
that we don’t attempt to act on a resource until it is ready. Another way to 
deal with this is to check the state of the resource and return a “not ready” 
error if it’s not ready yet. If this happens in a part of the system that is 
unattended by a user, we can re-queue the call to retry after a minimum delay, 
so that it proceeds only once the resource reaches the ready state, and is 
terminated after a maximum number of attempts or if the resource enters an 
error state. This would allow other work to proceed while the retry waits in 
the queue.
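
To make the threadpool-plus-serialization idea concrete, here is a minimal 
single-process sketch; the class and method names are illustrative, and a real 
multi-conductor deployment would need a shared lock or queue rather than the 
in-memory map shown here.

    import threading
    from concurrent.futures import ThreadPoolExecutor


    class ContainerOpDispatcher:
        """Run container operations asynchronously, one at a time per container."""

        def __init__(self, workers=10):
            self._pool = ThreadPoolExecutor(max_workers=workers)
            self._locks = {}                    # container uuid -> lock
            self._locks_guard = threading.Lock()

        def _lock_for(self, container_uuid):
            with self._locks_guard:
                return self._locks.setdefault(container_uuid, threading.Lock())

        def submit(self, container_uuid, operation, *args):
            """Return a future immediately; the operation runs on a worker thread."""
            def _run():
                # Serialize operations on the same container (create -> start -> ...).
                with self._lock_for(container_uuid):
                    return operation(*args)
            return self._pool.submit(_run)


    # Usage sketch:
    #   dispatcher = ContainerOpDispatcher()
    #   dispatcher.submit(container.uuid, docker_client.start, container.uuid)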

> If it's just local in one process, then I have a library for u that can solve 
> the problem of correctly ordering parallel operations ;)

What we are aiming for is a bit more distributed. 

Adrian

>> This mechanism can be further refined to achieve more asynchronous
>> behavior. [Phase2]
>> The approach above puts a prerequisite that operations for a given
>> container on a given Bay would go to the same Magnum-conductor instance.
>> [Phase0]
>> 5. The hand-off between Mcon and a thread from executor_threadpool can
>> be reflected through new states on the 'container' object. These states
>> can be helpful to recover/audit, in case of Mcon restart. [Phase1]
>> 
>> Other considerations -
>> 1. Using eventlet.greenthread instead of real threads => This approach
>> would require further refactoring of the execution code to embed yield
>> logic; otherwise a single greenthread would block others from progressing.
>> Given, we will extend the mechanism for multiple COEs, and to keep the
>> approach straight forward to begin with, we will use 'threading.Thread'
>> instead of 'eventlet.greenthread'.
>> 
>> 
>> Refs:-
>> [1] -
>> https://blueprints.launchpad.net/magnum/+spec/async-container-operations
>> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-15 Thread Adrian Otto
Vilobh,

Thanks for advancing this important topic. I took a look at what Tim referenced 
about how Nova is implementing nested quotas, and it seems to me that’s something 
we could fold into our design as well. Do you agree?

Adrian

On Dec 14, 2015, at 10:22 PM, Tim Bell wrote:

Can we have nested project quotas in from the beginning ? Nested projects are 
in Keystone V3 from Kilo onwards and retrofitting this is hard work.

For details, see the Nova functions at 
https://review.openstack.org/#/c/242626/. Cinder now also has similar functions.

Tim

From: Vilobh Meshram [mailto:vilobhmeshram.openst...@gmail.com]
Sent: 15 December 2015 01:59
To: OpenStack Development Mailing List (not for usage questions); OpenStack Mailing List (not for usage questions)
Subject: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

Hi All,

Currently, it is possible to create an unlimited number of resources such as 
bays/pods/services. In Magnum, there should be a limit on how many Magnum 
resources a user or project can create, and the limit should be configurable [1].

I propose the following design:

1. Introduce new table magnum.quotas
+------------+--------------+------+-----+---------+----------------+
| Field      | Type         | Null | Key | Default | Extra          |
+------------+--------------+------+-----+---------+----------------+
| id         | int(11)      | NO   | PRI | NULL    | auto_increment |
| created_at | datetime     | YES  |     | NULL    |                |
| updated_at | datetime     | YES  |     | NULL    |                |
| deleted_at | datetime     | YES  |     | NULL    |                |
| project_id | varchar(255) | YES  | MUL | NULL    |                |
| resource   | varchar(255) | NO   |     | NULL    |                |
| hard_limit | int(11)      | YES  |     | NULL    |                |
| deleted    | int(11)      | YES  |     | NULL    |                |
+------------+--------------+------+-----+---------+----------------+
resource can be Bay, Pod, Containers, etc.

2. An API controller for quotas will be created to make sure the basic CLI 
commands work:
quota-show, quota-delete, quota-create, quota-update
3. When the admin specifies a quota of X resources, the code should abide by it. 
For example, if the hard limit for Bay is 5 (i.e. a project can have a maximum 
of 5 Bays) and a user in a project tries to exceed that hard limit, it won't be 
allowed. The same goes for other resources.
4. Please note that quota validation only works for resources created via 
Magnum. I could not think of a way for Magnum to know whether COE-specific 
utilities created a resource in the background. One way could be to compare 
what is stored in magnum.quotas with the information about the actual resources 
created for a particular bay in the k8s/COE.
5. Introduce a config variable to set quota values.
If everyone agrees, I will start the changes by introducing quota restrictions 
on Bay creation.
Thoughts?
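
A rough sketch of the hard-limit check described in item 3, assuming hypothetical 
helpers for the magnum.quotas lookup and for counting a project's existing 
resources; the exception type and message are illustrative.

    class QuotaExceeded(Exception):
        """Raised when a create request would exceed the project's hard limit."""


    def enforce_quota(get_hard_limit, count_resources, project_id, resource="Bay"):
        """Reject a create request that would exceed the configured hard limit.

        get_hard_limit(project_id, resource) and count_resources(project_id,
        resource) stand in for the magnum.quotas lookup and a DB count.
        """
        hard_limit = get_hard_limit(project_id, resource)
        in_use = count_resources(project_id, resource)
        if hard_limit is not None and in_use >= hard_limit:
            raise QuotaExceeded(
                "%s quota exceeded for project %s: %d of %d in use"
                % (resource, project_id, in_use, hard_limit))


    # Example: with hard_limit=5 and 5 bays already created, the 6th create fails.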

-Vilobh
[1] https://blueprints.launchpad.net/magnum/+spec/resource-quota
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mesos Conductor usingcontainer-createoperations

2015-12-12 Thread Adrian Otto

On Dec 12, 2015, at 9:19 AM, Ton Ngo > 
wrote:


Hi Hongbin,
The proposal sounds reasonable: basically it provides an agnostic alternative 
to the single command line that a user can invoke with docker or kubectl. If 
the user needs more advanced support (environment variables, volumes, network, 
...), we would defer to the COE support and the user would need to pick one.

I concur.

I also notice that the command does not specify a bay. If this is the 
intention, this could also cover another use case that I hear frequently when 
talking about Magnum:
"I just want to run some containers, I don't want to have to create a bay or 
figure out what goes into a bay model"
In this case, there is probably a default bay model and a default bay that 
would be created on the first invocation. The command would take some extra 
time the first time, but afterward it should be fast. The default configuration 
would come with Magnum, or be set by the cloud provider.

I like this idea.

Adrian

Ton,

Hongbin Lu ---12/10/2015 08:01:06 PM---Hi Ton, Thanks for the 
feedback. Here is a clarification. The proposal is neither for using existing

From: Hongbin Lu
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 12/10/2015 08:01 PM
Subject: Re: [openstack-dev] Mesos Conductor using container-create operations





Hi Ton,

Thanks for the feedback. Here is a clarification. The proposal is neither for 
using an existing DSL to express a container, nor for inventing a new DSL. 
Instead, I proposed to hide the complexity of existing DSLs and expose a simple 
API to users. For example, if users want to create a container, they could type 
something like:

magnum container-create --name XXX --image XXX --command XXX

Magnum will process the request and translate it to COE-specific API calls. For 
k8s, we could dynamically generate a pod with a single container and fill the 
pod with the inputted values (image, command, etc.). Similarly, in Marathon, we 
could generate an app based on the inputs. A key advantage of this is that it is 
simple and doesn’t require COE-specific knowledge.
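
A minimal sketch of the kind of translation being proposed, assuming the 
conductor renders COE-native payloads from the generic inputs; the field names 
follow the Kubernetes v1 Pod and Marathon app JSON formats.

    def to_k8s_pod(name, image, command):
        """Render a single-container Kubernetes pod manifest from generic inputs."""
        return {
            "apiVersion": "v1",
            "kind": "Pod",
            "metadata": {"name": name},
            "spec": {
                "containers": [
                    # Naive split; a real implementation would use shlex.
                    {"name": name, "image": image, "command": command.split()}
                ]
            },
        }


    def to_marathon_app(name, image, command):
        """Render the equivalent Marathon app definition."""
        return {
            "id": name,
            "cmd": command,
            "instances": 1,
            "container": {"type": "DOCKER", "docker": {"image": image}},
        }

So "magnum container-create --name web --image nginx --command ..." could map to 
to_k8s_pod() on a k8s bay and to_marathon_app() on a Mesos/Marathon bay, with the 
conductor choosing the renderer based on the bay's COE.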

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: December-10-15 8:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Mesos Conductor using container-create operations

I think extending the container object to Mesos via a command like 
container-create is a fine idea. Going into the details, however, we run into 
some complications.
1. The user would still have to choose a DSL to express the container. This 
would have to be a kube and/or swarm DSL since we don't want to invent a new 
one.
2. For the Mesos bay in particular, kube or swarm may be running on top of Mesos 
alongside Marathon, so somewhere along the line Magnum has to be able to 
make the distinction and handle things appropriately.

We should think through the scenarios carefully to come to agreement on how 
this would work.

Ton Ngo,


Hongbin Lu ---12/09/2015 03:09:23 PM---As Bharath mentioned, I am 
+1 to extend the "container" object to Mesos bay. In addition, I propose

From: Hongbin Lu
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 12/09/2015 03:09 PM
Subject: Re: [openstack-dev] Mesos Conductor using container-create operations






As Bharath mentioned, I am +1 to extend the “container” object to Mesos bay. In 
addition, I propose to extend “container” to k8s as well (the details are 
described in this BP [1]). The goal is to promote this API resource to be 
technology-agnostic and make it portable across all COEs. I am going to justify 
this proposal by a use case.

Use case:
I have an app. I used to deploy my app to a VM in OpenStack. Right now, I want 
to deploy my app to a container. I have basic knowledge of containers but am not 
familiar with any specific container tech. I want a simple and intuitive API to 
operate a container (i.e. CRUD), like how I operated a VM before. I find it 
hard to learn the DSL introduced by a specific COE (k8s/marathon). Most 
importantly, I want my deployment to be portable regardless of the choice of 
cluster management system and/or container runtime. I want OpenStack to be the 
only integration point, because I don’t want to be locked in to a specific 
container tech. I want to avoid the risk of a specific container tech being 
replaced by another in the future. Optimally, I want Keystone to be the only 
authentication system that I need to deal with. I don't want the extra 
complexity of dealing with an additional authentication system introduced by a 
specific COE.

Solution:
Implement "container" 

Re: [openstack-dev] [magnum]storage for docker-bootstrap

2015-12-07 Thread Adrian Otto
Until I see evidence to the contrary, I think adding some bootstrap complexity 
to simplify the process of bay node image management and customization is worth 
it. Think about where most users will focus customization efforts. My guess is 
that it will be within these docker images. We should ask our team to keep 
things as simple as possible while working to containerize components where 
that makes sense. That may take some creativity and a few iterations to achieve.

We can pivot on this later if we try it and hate it.

Thanks,

Adrian

On Dec 7, 2015, at 1:57 AM, Kai Qiang Wu 
> wrote:


HI Hua,

From my point of view, not everything needs to be put in a container. Let's make 
the initial version simple and get it working, and then discuss other options if 
needed in IRC or the weekly meeting.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!

王华 ---07/12/2015 10:10:38 am---Hi all, If we want to run etcd and 
flannel in container, we will introduce

From: 王华
To: Egor Guz
Cc: "openstack-dev@lists.openstack.org"
Date: 07/12/2015 10:10 am
Subject: Re: [openstack-dev] [magnum]storage for docker-bootstrap





Hi all,

If we want to run etcd and flannel in containers, we will introduce 
docker-bootstrap, which makes the setup more complex, as Egor pointed out. 
Should we pay that price?

On Sat, Nov 28, 2015 at 8:45 AM, Egor Guz wrote:

Wanghua,

I don’t think moving flannel into a container is a good idea. This setup is great 
for a dev environment, but becomes too complex from an operator's point of view 
(you add an extra Docker daemon and need an extra Cinder volume for this daemon; 
also keep in mind that it makes sense to keep the etcd data folder on Cinder 
storage as well, because etcd is a database). flannel is just three files without 
extra dependencies, and it’s much easier to download it during cloud-init ;)

I agree that we have pain with building Fedora Atomic images, but instead of 
simplifying this process we should switch to other, more “friendly” images (e.g. 
Fedora/CentOS/Ubuntu) which we can easily build with diskimage-builder.
Also we can fix the CoreOS template (I believe people asked about it more than 
about Atomic), but we may face issues similar to Atomic's when we try to 
integrate non-CoreOS products (e.g. Calico or Weave).

—
Egor

From: 王华
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, November 26, 2015 at 00:15
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum]storage for docker-bootstrap

Hi Hongbin,

The docker daemon on the master node stores data in 
/dev/mapper/atomicos-docker--data and metadata in 
/dev/mapper/atomicos-docker--meta; both are logical volumes. The docker daemon 
on the minion node stores data in the Cinder volume, but 
/dev/mapper/atomicos-docker--data and /dev/mapper/atomicos-docker--meta are not 
used. If we want to leverage a Cinder volume for docker on the master, should we 
drop /dev/mapper/atomicos-docker--data and /dev/mapper/atomicos-docker--meta? I 
think it is not necessary to allocate a Cinder volume. It is enough to allocate 
two logical volumes for docker, because only etcd, flannel, and k8s run in this 
docker daemon, and they do not need a large amount of storage.

Best regards,
Wanghua

On Thu, Nov 26, 2015 at 12:40 AM, Hongbin Lu wrote:
Here is a bit more context.

Currently, in the k8s and swarm bays, some required binaries (i.e. etcd and 
flannel) are built into the image and run on the host. We are exploring the 
possibility of containerizing some of these system components. The rationales 
are (i) it is infeasible to build 

Re: [openstack-dev] [magnum][api] Looking for a Cross-Project Liaison for the API Working Group

2015-12-02 Thread Adrian Otto
Thanks team for stepping up to fill this important role. Please let me know if 
there is anything I can do to assist you.

Adrian

On Dec 2, 2015, at 2:19 PM, Everett Toews 
<everett.to...@rackspace.com<mailto:everett.to...@rackspace.com>> wrote:

On Dec 2, 2015, at 12:32 AM, 王华 
<wanghua.hum...@gmail.com<mailto:wanghua.hum...@gmail.com>> wrote:

Adrian,
I would like to be an alternate.

Regards
Wanghua


On Wed, Dec 2, 2015 at 10:19 AM, Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:
Everett,

Thanks for reaching out. Eli is a good choice for this role. We should also 
identify an alternate.

Adrian

--
Adrian

> On Dec 1, 2015, at 6:15 PM, Qiao,Liyong 
> <liyong.q...@intel.com<mailto:liyong.q...@intel.com>> wrote:
>
> hi Everett
> I'd like to take it.
>
> thanks
> Eli.

Great!

Eli and Wanghua, clone the api-wg repo as you would any repo and add yourselves 
to this file

http://git.openstack.org/cgit/openstack/api-wg/tree/doc/source/liaisons.json

Please make sure you use your name *exactly* as it appears in Gerrit. It should 
be the same as the name that appears in the Reviewer field on any review in 
Gerrit. Also, double check that you have only one account in Gerrit.
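
For reference, a minimal sketch of the usual workflow (it assumes git-review is 
already configured for your Gerrit account; the topic branch name is only illustrative):

git clone https://git.openstack.org/openstack/api-wg
cd api-wg
# edit doc/source/liaisons.json and add your Gerrit name under the magnum entry
git checkout -b add-magnum-liaisons
git commit -a -m "Add Magnum cross-project liaisons"
git review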

If you need help, just ask in #openstack-sdks where the API WG hangs out on IRC.

Cheers,
Everett

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][api] Looking for a Cross-Project Liaison for the API Working Group

2015-12-01 Thread Adrian Otto
Everett,

Thanks for reaching out. Eli is a good choice for this role. We should 
identify an alternate as well.

Adrian

--
Adrian

> On Dec 1, 2015, at 6:15 PM, Qiao,Liyong  wrote:
> 
> hi Everett
> I'd like to take it.
> 
> thanks
> Eli.
> 
>> On 2015年12月02日 05:18, Everett Toews wrote:
>> Hello Magnumites,
>> 
>> The API Working Group [1] is looking for a Cross-Project Liaison [2] from 
>> the Magnum project.
>> 
>> What does such a role entail?
>> 
>> The API Working Group seeks API subject matter experts for each project to 
>> communicate plans for API updates, review API guidelines with their 
>> project’s view in mind, and review the API Working Group guidelines as they 
>> are drafted. The Cross-Project Liaison (CPL) should be familiar with the 
>> project’s REST API design and future planning for changes to it.
>> 
>> Please let us know if you're interested and we'll bring you on board!
>> 
>> Cheers,
>> Everett
>> 
>> [1] http://specs.openstack.org/openstack/api-wg/
>> [2] http://specs.openstack.org/openstack/api-wg/liaisons.html
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> -- 
> BR, Eli(Li Yong)Qiao
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Why set DEFAULT_DOCKER_TIMEOUT = 10 in docker client?

2015-11-24 Thread Adrian Otto
Li Yong,

At any rate, this should not be hardcoded. I agree that the default value 
should match the RPC timeout.
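
To make that concrete, a purely hypothetical sketch of the operator-facing knob once 
the value is no longer hardcoded (the option name below is illustrative only and not 
an existing Magnum option; oslo.messaging's rpc_response_timeout defaults to 60 seconds):

# hypothetical magnum.conf fragment -- illustrative only, not merged code
[docker]
# keep this aligned with rpc_response_timeout (default 60)
default_timeout = 60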

Adrian

> On Nov 24, 2015, at 11:23 PM, Qiao,Liyong  wrote:
> 
> hi all
> In the Magnum code, we hardcode it as DEFAULT_DOCKER_TIMEOUT = 10.
> This brings trouble in some bad networking environments (or with a poorly 
> performing swarm master).
> At least it doesn't work on our gate.
> 
> Here is the test patch on the gate: https://review.openstack.org/249522 . I set it 
> to 180 to make sure the failure is due to the time_out parameter passed to the 
> docker client, but we need to choose a suitable value.
> 
> I checked the docker client's default value,
> DEFAULT_TIMEOUT_SECONDS = 60. I wonder why we overwrite it as 10?
> 
> Please let me know your thoughts. My suggestion is that we set 
> DEFAULT_DOCKER_TIMEOUT
> as long as our RPC time_out.
> 
> -- 
> BR, Eli(Li Yong)Qiao
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Mesos Conductor

2015-11-19 Thread Adrian Otto
I’m open to allowing magnum to pass a blob of data (such as a lump of JSON or 
YAML) to the Bay's native API. That approach strikes a balance that’s 
appropriate.
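
For illustration, a sketch of what such a pass-through could look like against a 
Marathon endpoint (the master address is a placeholder, and the fields are standard 
Marathon application fields, not a committed Magnum interface):

# Sketch: POST an app definition straight to the bay's native Marathon API
curl -X POST http://MARATHON_MASTER:8080/v2/apps \
  -H "Content-Type: application/json" \
  -d '{
        "id": "/hello",
        "cmd": "python3 -m http.server 8080",
        "cpus": 0.25,
        "mem": 64.0,
        "instances": 2
      }'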

Adrian

On Nov 19, 2015, at 10:01 AM, bharath thiruveedula 
<bharath_...@hotmail.com<mailto:bharath_...@hotmail.com>> wrote:

Hi,

In the present scenario, we can have a mesos conductor with the existing 
attributes [1], or we can add extra options like 'portMappings', 'instances' and 
'uris' [2]. The other option is to take a JSON file as input to 'magnum 
container-create' and dispatch it to the corresponding conductor, which would then 
handle the JSON input. Let me know your opinions.


Regards
Bharath T




[1]https://goo.gl/f46b4H
[2]https://mesosphere.github.io/marathon/docs/application-basics.html

To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
From: wk...@cn.ibm.com<mailto:wk...@cn.ibm.com>
Date: Thu, 19 Nov 2015 10:47:33 +0800
Subject: Re: [openstack-dev] [magnum] Mesos Conductor

@bharath,

1) Actually, if you mean using container-create(delete) on a mesos bay for apps, 
I am not sure how different the docker interface and the mesos interface would be. 
One point: when you introduce that feature, please do not make the docker container 
interface more complicated than it is now. I worry that it would confuse end users 
more than the unification would benefit them (maybe pass one JSON file as an 
optional parameter to create containers in mesos).

2) The unified interface needs more thought. We should not make end users learn new 
concepts or interfaces unless we can offer a clearer interface, but the different 
COEs vary a lot, so it is very challenging.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com<mailto:wk...@cn.ibm.com>
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From:  bharath thiruveedula 
<bharath_...@hotmail.com<mailto:bharath_...@hotmail.com>>
To:  OpenStack Development Mailing List not for usage questions 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date:  19/11/2015 10:31 am
Subject:  Re: [openstack-dev] [magnum] Mesos Conductor





@hongbin, @adrian I agree with you. So can we go ahead with magnum 
container-create(delete) ... for the mesos bay (which would actually create a 
mesos (marathon) app internally)?

@jay, yes, there are multiple frameworks that use the mesos library, but the mesos 
bay we are creating uses marathon. We had a discussion about this topic on IRC, and 
I was asked to implement the initial version for marathon. I also agree with you 
about having a unified client interface for creating pods and apps.

Regards
Bharath T


Date: Thu, 19 Nov 2015 10:01:35 +0800
From: jay.lau@gmail.com<mailto:jay.lau@gmail.com>
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Mesos Conductor

+1.

One problem I want to mention is that for the mesos integration we cannot limit 
ourselves to Marathon + Mesos, as there are many frameworks that can run on top of 
Mesos, such as Chronos, Kubernetes, etc. We may need to consider more for the Mesos 
integration, as there is a huge ecosystem built on top of Mesos.

On Thu, Nov 19, 2015 at 8:26 AM, Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:

Bharath,

I agree with Hongbin on this. Let’s not expand magnum to deal with apps or 
appgroups in the near term. If there is a strong desire to add these things, we 
could allow it by having a plugin/extensions interface for the Magnum API to 
allow additional COE specific features. Honestly, it’s just going to be a 
nuisance to keep up with the various upstreams until they become completely 
stable from an API perspective, and no additional changes are likely. All of 
our COE’s still have plenty of maturation ahead of them, so this is the wrong 
time to wrap them.

If someone really wants apps and appgroups, (s)he could add that to an 
experimental branch of the magnum client, and have it interact with the 
marathon API directly rather than trying to represent those resources in 
Magnum. If that tool became popular, then we could revisit this topic for 
further consideration

Re: [openstack-dev] [magnum] Issue on history of renamed file/folder

2015-11-19 Thread Adrian Otto
As I see this, we need to pick the better of two options, even when neither is 
perfect. I’d rather have magnum’s source as intuitive and easy to maintain as 
possible. If it becomes more difficult to follow the commit history for a file 
in order to achieve that improvement, I’m willing to live with it. In truth, 
following the commit history of a file is not something we do often in our 
development workflows, so it does not need to be optimized. On the other hand, 
looking at the contents of our source tree is something that all of us do 
often, and we deserve to have that be nice and clear.

Adrian

On Nov 19, 2015, at 1:13 AM, Tom Cammann wrote:

This is a defect with Github and should not affect our ability to fix defects 
and correct/refactor our code. git is a CLI tool not a GUI tool and should be 
treated as such. We should not be imposing restrictions on our developers 
because a 3rd party GUI does not fit our workflows.

Tom

On 18/11/15 22:48, Hongbin Lu wrote:
Hi team,

I would like to start this ML thread to discuss the git rename issue. Here is the 
problem. In Git, it is handy to retrieve the commit history of a file/folder. There 
are several ways to do that. In the CLI, you can run “git log …” to show the 
history. In GitHub, you can click the “History” button at the top of the file. The 
history of a file is traced back to the commit in which the file was created or 
renamed. In other words, renaming a file cuts off the commit history of the 
file. If you want to trace the full history of a renamed file, in the CLI you can 
use “git log --follow …”. However, this feature is not supported in GitHub.
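
For reference, the CLI form is simply the following (the file path here is 
illustrative):

git log --follow -- magnum/conductor/handlers/docker_conductor.py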

A way to mitigate the issue is to avoid renaming files/folders when it is not for 
fixing a functional defect (e.g. only for improving the naming style). If we do 
that, we sacrifice the quality of file/folder names to get a more traceable 
history. On the other hand, if we don’t do that, we have to tolerate the 
history disconnection in GitHub. Which solution is preferred? Or is there a better 
way to handle it?

Best regards,
Hongbin



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Mesos Conductor

2015-11-18 Thread Adrian Otto
Bharath,

I agree with Hongbin on this. Let’s not expand magnum to deal with apps or 
appgroups in the near term. If there is a strong desire to add these things, we 
could allow it by having a plugin/extensions interface for the Magnum API to 
allow additional COE specific features. Honestly, it’s just going to be a 
nuisance to keep up with the various upstreams until they become completely 
stable from an API perspective, and no additional changes are likely. All of 
our COE’s still have plenty of maturation ahead of them, so this is the wrong 
time to wrap them.

If someone really wants apps and appgroups, (s)he could add that to an 
experimental branch of the magnum client, and have it interact with the 
marathon API directly rather than trying to represent those resources in 
Magnum. If that tool became popular, then we could revisit this topic for 
further consideration.

Adrian

> On Nov 18, 2015, at 3:21 PM, Hongbin Lu  wrote:
> 
> Hi Bharath,
>  
> I agree on the “container” part. We can implement “magnum container-create ..” 
> for the mesos bay in the way you mentioned. Personally, I don’t like introducing 
> “apps” and “appgroups” resources to Magnum, because they are already provided 
> by the native tool [1]. I don’t see the benefit of implementing a wrapper API to 
> offer what the native tool already offers. However, if you can point out a valid 
> use case for wrapping the API, I will give it more thought.
>  
> Best regards,
> Hongbin
>  
> [1] https://docs.mesosphere.com/using/cli/marathonsyntax/
>  
> From: bharath thiruveedula [mailto:bharath_...@hotmail.com] 
> Sent: November-18-15 1:20 PM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [magnum] Mesos Conductor
>  
> Hi all,
>  
> I am working on the blueprint [1]. As per my understanding, we have two 
> resources/objects in mesos+marathon:
>  
> 1) Apps: a combination of instances/containers running on multiple hosts, 
> representing a service. [2]
> 2) Application Groups: a group of apps; for example, we can have a database 
> application group which consists of a mongoDB app and a MySQL app. [3]
>  
> So I think we need to have two resources, 'apps' and 'appgroups', in the mesos 
> conductor, like we have pod and rc for k8s. Regarding the 'magnum container' 
> command, we can create, delete and retrieve container details as part of a 
> mesos app itself (container = app with 1 instance). I think that in the mesos 
> case 'magnum app-create ...' and 'magnum container-create ...' will use the 
> same REST API for both cases. 
>  
> Let me know your opinion/comments on this and correct me if I am wrong
>  
> [1]https://blueprints.launchpad.net/magnum/+spec/mesos-conductor.
> [2]https://mesosphere.github.io/marathon/docs/application-basics.html
> [3]https://mesosphere.github.io/marathon/docs/application-groups.html
>  
>  
> Regards
> Bharath T 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

2015-11-11 Thread Adrian Otto
Eli,

I like this proposed approach. We did have a discussion with a few Stackers 
from openstack-infra in Tokyo to express our interest in using bare metal for 
gate testing. That’s still a way out, but it may be another way to speed this 
up further. A third idea would be to adjust the nova virt driver in our 
devstack image to use libvirt/lxc by default (instead of libvirt/kvm), which 
would allow bays to be created more rapidly. This would potentially allow us 
to perform repeated bay creations in the same pipeline in a reasonable 
timeframe.
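
For the third idea, a sketch of what that could look like in a devstack local.conf 
(these are the standard devstack variables; whether lxc is actually workable for our 
jobs would still need to be proven):

[[local|localrc]]
VIRT_DRIVER=libvirt
LIBVIRT_TYPE=lxc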

Adrian

On Nov 11, 2015, at 11:02 PM, Qiao,Liyong wrote:

hello all:

I will give an update on the status of Magnum functional testing. Functional/integration 
testing is important to us: since we change/modify the Heat templates rapidly, we need 
to verify that the modifications are correct, so we need to cover all the templates 
Magnum has. Currently we only have k8s testing (and only with the Atomic image), so we 
need to add more, like swarm (WIP) and mesos (planned); we may also need to support the 
CoreOS image. Lots of work needs to be done.

Regarding the time cost of the functional testing, we discussed it during the Tokyo 
summit, and Adrian expected that we could reduce it to 20 min.

I did some analysis of the functional/integration testing in the gate pipeline.
Taking the k8s functional testing as an example, we run the following test cases:

1) baymodel creation
2) bay (tls_disabled=True) creation/deletion
3) bay (tls_disabled=False) creation to test the k8s API, deleting it after 
testing.

The time cost of each stage is as follows:

  *   devstack prepare: 5-6 mins
  *   running devstack: 15 mins (includes downloading the Atomic image)
  *   1) and 2): 15 mins
  *   3): 15 + 3 mins

In total that is about 60 mins; a current example is 1h 05m 57s, see 
http://logs.openstack.org/10/243910/1/check/gate-functional-dsvm-magnum-k8s/5e61039/console.html
for all the time stamps.

I don't think it is possible to reduce the time to 20 mins, since the devstack setup 
alone already takes 20 mins.

To reduce the time, I suggest creating only one bay per pipeline and doing various 
kinds of testing on that bay; if we want to test a specific kind of bay (for example 
a particular network_driver, etc.), we create a new pipeline.

So I think we can delete 2), since 3) does similar things (create/delete); the only 
difference is that 3) uses tls_disabled=False. What do you think?
See https://review.openstack.org/244378 for the time cost; it is reduced to about 45 
min (48m 50s in the example).

=
For other related functional testing work:
I have finished splitting the functional testing per COE, so we now have these pipelines:

  *   gate-functional-dsvm-magnum-api: 30 mins
  *   gate-functional-dsvm-magnum-k8s: 60 mins

For the swarm pipeline, the patches are done and under review now (they work fine on the gate):
https://review.openstack.org/244391
https://review.openstack.org/226125



--
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] competing implementations

2015-11-05 Thread Adrian Otto
Sometimes producing alternate implementations can be more effective than 
abstract discussions because they are more concrete. If an implementation can 
be produced (possibly multiple different implementations by different 
contributors) in a short period of time without significant effort, that’s 
usually better than a lengthy discussion. Keep in mind that even a WIP review 
can be helpful for facilitating this sort of a discussion. Having a talk about 
a specific review is usually much more effective than when the discussion is 
happening completely in abstract terms.

Keep in mind that many OpenStack contributors speak English as a second 
language. They may actually be much more effective in expressing their ideas in 
code rather than in the form of a debate. Using alternate implementations for 
something is one way to let these contributors shine with a novel idea, even if 
they struggle to articulate themselves or feel uncomfortable in a verbal debate.

If you are about to go implement something that takes significant effort, 
then it would be annoying to have an alternate implementation show up, and you’ll 
feel like your work goes to waste. The way to prevent this is to encourage all 
active contributors to share ideas in the project IRC channel, show up 
regularly to the team meetings, and convey your intent to the technical lead. If 
you are surprised by alternate implementations of your work, that’s a symptom 
that one or more of you are not well coordinated. If we solve that, everyone 
can potentially move more quickly. Anyone struggling with this problem might 
consider the guidance I offered in Vancouver [1].

Adrian

[1] 
https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/7-habits-of-highly-effective-contributors

On Nov 4, 2015, at 7:04 PM, Vikas Choudhary wrote:


Looking at it from the angle of the contributor whose approach turns out not to be 
better than the competing one, it will be far easier for him to accept the logic at 
the discussion stage rather than after weeks of tracking a review request and 
addressing review comments.

On 5 Nov 2015 08:24, "Vikas Choudhary" wrote:

@Toni,

In scenarios where two developers with different implementation approaches are not 
able to reach any consensus over Gerrit or the ML, IMO the other core members can 
hold a vote or a discussion, and the PTL should then take a call on which approach 
to accept and allow for implementation. The community has to make a call even after 
the implementations exist anyway, so why waste effort on implementation unnecessarily?
WDYT?

On 4 Nov 2015 19:35, "Baohua Yang" wrote:
Sure, thanks!
And I suggest adding the time and channel information to the kuryr wiki page.


On Wed, Nov 4, 2015 at 9:45 PM, Antoni Segura Puimedon wrote:


On Wed, Nov 4, 2015 at 2:38 PM, Baohua Yang wrote:
+1, Antoni!
btw, is our weekly meeting still on the meeting-4 channel?
I didn't find it there yesterday.

Yes, it is still on openstack-meeting-4, but this week we skipped it, since 
some of us were
traveling and we already held the meeting on Friday. Next Monday it will be 
held as usual
and the following week we start alternating (we have yet to get a room for that 
one).

On Wed, Nov 4, 2015 at 9:27 PM, Antoni Segura Puimedon wrote:
Hi Kuryrs,

Last Friday, as part of the contributors meetup, we also discussed code 
contribution etiquette. As in other OpenStack projects (Magnum comes to mind), 
the etiquette for what to do when there is disagreement about the way to code a 
blueprint or fix a bug is as follows:

1.- Try to reach out so that the original implementation gets closer to a 
compromise, by having the discussion in Gerrit (and on the mailing list if it 
requires a wider range of arguments).
2.- If a compromise can't be reached, feel free to make a separate 
implementation, arguing well its differences, virtues and comparative 
disadvantages. We trust the whole community of reviewers to be able to judge 
which is the best implementation, and I expect that often the reviewers will 
steer both submissions closer than they originally were.
3.- If both competing implementations get the necessary support, the core 
reviewers will take a specific decision on which to take, based on technical 
merit. Important factors are:
* conciseness,
* simplicity,
* loose coupling,
* logging and error reporting,
* test coverage,
* extensibility (when an immediate pending and blueprinted feature can 
better be built on top of it).
* documentation,
* performance.

It is important to remember that technical disagreement is a healthy thing and 
should be tackled with civility. If we follow the rules 

Re: [openstack-dev] magnum on OpenStack Kilo

2015-11-02 Thread Adrian Otto
Bruce,

That sounds like this bug to me:

https://bugs.launchpad.net/magnum/+bug/1411333

Resolved by:

https://review.openstack.org/148059

I think you need this:


keystone service-create --name=magnum \
--type=container \
--description="magnum Container Service"
keystone endpoint-create --service=magnum \
 --publicurl=http://127.0.0.1:9511/v1 \
 --internalurl=http://127.0.0.1:9511/v1 \
 --adminurl=http://127.0.0.1:9511/v1 \
 --region RegionOne
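
For reference, a rough python-openstackclient equivalent of the service-create step 
(in the same Kilo-era style as the endpoint command you used) would be:

openstack service create --name magnum \
  --description "magnum Container Service" \
  container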

Any chance you missed the first of these two? Also, be sure you are using the 
latest Magnum, either from the master branch or from the Downloads section of:

https://wiki.openstack.org/wiki/Magnum

Thanks,

Adrian


On Nov 2, 2015, at 2:25 PM, Bruce D'Amora wrote:

Does anyone have any guidance for configuring magnum on OpenStack Kilo? This is 
outside of devstack. I thought I had it configured, and when I log into Horizon I 
see that the magnum service is started, but when I execute CLI commands such as 
magnum service-list or magnum container-list I get ERRORs:
ERROR: publicURL endpoint for container service not found

I added an endpoint:
openstack endpoint create \
  --publicurl http://9.2.132.246:9511/v1 \
  --internalurl http://9.2.132.246:9511/v1 \
  --adminurl http://9.2.132.246:9511/v1 \
  --region RegionOne \
  magnum

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] magnum on OpenStack Kilo

2015-11-02 Thread Adrian Otto
Bruce,

Another suggestion for your consideration:

The region the client is using needs to match the region the endpoint is set to 
use in the service catalog. Check that OS_REGION_NAME in the environment 
running the client is set to ‘RegionOne’ rather than ‘regionOne’. That has 
snagged others in the past as well.
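
A quick way to check both sides (a small sketch; it assumes python-openstackclient 
is installed and your credentials are loaded):

echo $OS_REGION_NAME                       # should print RegionOne, not regionOne
openstack catalog list | grep -A 3 magnum  # shows the region recorded for the endpoint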

Adrian

On Nov 2, 2015, at 4:22 PM, Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:

Bruce,

That sounds like this bug to me:

https://bugs.launchpad.net/magnum/+bug/1411333

Resolved by:

https://review.openstack.org/148059

I think you need this:


keystone service-create --name=magnum \
--type=container \
--description="magnum Container Service"
keystone endpoint-create --service=magnum \
 --publicurl=http://127.0.0.1:9511/v1 \
 --internalurl=http://127.0.0.1:9511/v1 \
 --adminurl=http://127.0.0.1:9511/v1 \
 --region RegionOne

Any chance you missed the first of these two? Also, be sure you are using the 
latest Magnum, either from the master branch or from the Downloads section of:

https://wiki.openstack.org/wiki/Magnum

Thanks,

Adrian


On Nov 2, 2015, at 2:25 PM, Bruce D'Amora 
<bddam...@gmail.com<mailto:bddam...@gmail.com>> wrote:

Does anyone have any guidance for configuring magnum on OpenStack Kilo? This is 
outside of devstack. I thought I had it configured, and when I log into Horizon I 
see that the magnum service is started, but when I execute CLI commands such as 
magnum service-list or magnum container-list I get ERRORs:
ERROR: publicURL endpoint for container service not found

I added an endpoint:
openstack endpoint create \
  --publicurl http://9.2.132.246:9511/v1 \
  --internalurl http://9.2.132.246:9511/v1 \
  --adminurl http://9.2.132.246:9511/v1 \
  --region RegionOne \
  magnum

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

