Re: [openstack-dev] [magnum] High Availability

2016-04-21 Thread Hongbin Lu
Ricardo,

That is great! It is good to hear Magnum works well on your side.

Best regards,
Hongbin

> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> Sent: April-21-16 1:48 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
> Hi.
> 
> The thread is a month old, but I sent a shorter version of this to
> Daneyon before with some info on the things we dealt with to get Magnum
> deployed successfully. We wrapped it up in a post (there's a video
> linked there with some demos at the end):
> 
> http://openstack-in-production.blogspot.ch/2016/04/containers-and-cern-
> cloud.html
> 
> Hopefully the pointers to the relevant blueprints for some of the
> issues we found will be useful for others.
> 
> Cheers,
>   Ricardo
> 
> On Fri, Mar 18, 2016 at 3:42 PM, Ricardo Rocha <rocha.po...@gmail.com>
> wrote:
> > Hi.
> >
> > We're running a Magnum pilot service - which means it's being
> > maintained just like all other OpenStack services and running on the
> > production infrastructure, but only available to a subset of tenants
> > for a start.
> >
> > We're learning a lot in the process and will happily report on this
> in
> > the next couple weeks.
> >
> > The quick summary is that it's looking good and stable with a few
> > hiccups in the setup, which are handled by patches already under review.
> > The one we need the most is the trustee user (USER_TOKEN in the bay
> > heat params is preventing scaling after the token expires), but with
> > the review in good shape we look forward to trying it very soon.
> >
> > Regarding Barbican, we'll keep you posted; we're working on the
> > missing puppet bits.
> >
> > Ricardo
> >
> > On Fri, Mar 18, 2016 at 2:30 AM, Daneyon Hansen (danehans)
> > <daneh...@cisco.com> wrote:
> >> Adrian/Hongbin,
> >>
> >> Thanks for taking the time to provide your input on this matter.
> After reviewing your feedback, my takeaway is that Magnum is not ready
> for production without implementing Barbican or some other future
> feature such as the Keystone option Adrian provided.
> >>
> >> All,
> >>
> >> Is anyone using Magnum in production? If so, I would appreciate your
> input.
> >>
> >> -Daneyon Hansen
> >>
> >>> On Mar 17, 2016, at 6:16 PM, Adrian Otto <adrian.o...@rackspace.com>
> wrote:
> >>>
> >>> Hongbin,
> >>>
> >>> One alternative we could discuss as an option for operators that
> have a good reason not to use Barbican, is to use Keystone.
> >>>
> >>> Keystone credentials store:
> >>> http://specs.openstack.org/openstack/keystone-
> specs/api/v3/identity-
> >>> api-v3.html#credentials-v3-credentials
> >>>
> >>> The contents are stored in plain text in the Keystone DB, so we
> would want to generate an encryption key per bay, encrypt the
> certificate and store it in keystone. We would then use the same key to
> decrypt it upon reading the key back. This might be an acceptable
> middle ground for clouds that will not or can not run Barbican. This
> should work for any OpenStack cloud since Grizzly. The total amount of
> code in Magnum would be small, as the API already exists. We would need
> a library function to encrypt and decrypt the data, and ideally a way
> to select different encryption algorithms in case one is judged weak at
> some point in the future, justifying the use of an alternate.
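> >>> 
> >>> A minimal sketch of what that could look like, assuming a Fernet key
> >>> generated per bay and the existing Keystone v3 /credentials API (the
> >>> endpoint URL, the "cert" type string, and where the per-bay key itself
> >>> would be kept are all illustrative assumptions, not a settled design):
> >>> 
> >>>     import json
> >>>     import requests
> >>>     from cryptography.fernet import Fernet
> >>> 
> >>>     KEYSTONE = "http://keystone.example.com:5000"  # hypothetical endpoint
> >>> 
> >>>     def store_bay_cert(token, user_id, project_id, bay_uuid, cert_pem):
> >>>         # One symmetric key per bay; keeping this key safe (config, DB
> >>>         # column, etc.) is a separate question. cert_pem is bytes.
> >>>         bay_key = Fernet.generate_key()
> >>>         encrypted = Fernet(bay_key).encrypt(cert_pem)
> >>>         body = {"credential": {
> >>>             "user_id": user_id,
> >>>             "project_id": project_id,
> >>>             "type": "cert",
> >>>             "blob": json.dumps({"bay": bay_uuid,
> >>>                                 "data": encrypted.decode()})}}
> >>>         resp = requests.post(KEYSTONE + "/v3/credentials",
> >>>                              headers={"X-Auth-Token": token}, json=body)
> >>>         resp.raise_for_status()
> >>>         return bay_key, resp.json()["credential"]["id"]
> >>> 
> >>>     def load_bay_cert(token, credential_id, bay_key):
> >>>         resp = requests.get(KEYSTONE + "/v3/credentials/" + credential_id,
> >>>                             headers={"X-Auth-Token": token})
> >>>         resp.raise_for_status()
> >>>         blob = json.loads(resp.json()["credential"]["blob"])
> >>>         return Fernet(bay_key).decrypt(blob["data"].encode())
> >>> 
> >>> The library function mentioned above would essentially wrap the two Fernet
> >>> calls, so swapping the algorithm later would stay a local change.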
> >>>
> >>> Adrian
> >>>
> >>>> On Mar 17, 2016, at 4:55 PM, Adrian Otto
> <adrian.o...@rackspace.com> wrote:
> >>>>
> >>>> Hongbin,
> >>>>
> >>>>> On Mar 17, 2016, at 2:25 PM, Hongbin Lu <hongbin...@huawei.com>
> wrote:
> >>>>>
> >>>>> Adrian,
> >>>>>
> >>>>> I think we need a broader set of inputs in this matter, so I
> moved the discussion from whiteboard back to here. Please check my
> replies inline.
> >>>>>
> >>>>>> I would like to get a clear problem statement written for this.
> >>>>>> As I see it, the problem is that there is no safe place to put
> certificates in clouds that do not run Barbican.
> >>>>>> It seems the solution is to make it easy to add Barbican such
> that it's included in the setup for Magnum.
> >>>>> No, the solution is to explore a non-Barbican solut

Re: [openstack-dev] [magnum] High Availability

2016-04-21 Thread Ricardo Rocha
>>>>>> Magnum should not be in the business of credential storage when there is 
>>>>>> an existing service focused on that need.
>>>>>>
>>>>>> Is there an issue with running Barbican on older clouds?
>>>>>> Anyone can choose to use the builtin option with Magnum if they don't 
>>>>>> have Barbican.
>>>>>> A known limitation of that approach is that certificates are not 
>>>>>> replicated.
>>>>> I guess the *builtin* option you referred to is simply placing the 
>>>>> certificates on the local file system. A few of us had concerns about this 
>>>>> approach (in particular, Tom Cammann gave a -2 on the review [1]) 
>>>>> because it cannot scale beyond a single conductor. Finally, we made a 
>>>>> compromise to land this option and use it for testing/debugging only. In 
>>>>> other words, this option is not for production. As a result, Barbican 
>>>>> becomes the only option for production, which is the root of the problem. 
>>>>> It basically forces everyone to install Barbican in order to use Magnum.
>>>>>
>>>>> [1] https://review.openstack.org/#/c/212395/
>>>>>
>>>>>> It's probably a bad idea to replicate them.
>>>>>> That's what Barbican is for. --adrian_otto
>>>>> Frankly, I am surprised that you disagreed here. Back in July 2015, we 
>>>>> all agreed to have two phases of implementation and the statement was 
>>>>> made by you [2].
>>>>>
>>>>> 
>>>>> #agreed Magnum will use Barbican for an initial implementation for 
>>>>> certificate generation and secure storage/retrieval.  We will commit to a 
>>>>> second phase of development to eliminating the hard requirement on 
>>>>> Barbican with an alternate implementation that implements the functional 
>>>>> equivalent implemented in Magnum, which may depend on libraries, but not 
>>>>> Barbican.
>>>>> 
>>>>>
>>>>> [2] 
>>>>> http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html
>>>>
>>>> The context there is important. Barbican was considered for two purposes: 
>>>> (1) CA signing capability, and (2) certificate storage. My willingness to 
>>>> implement an alternative was based on our need to get a certificate 
>>>> generation and signing solution that actually worked, as Barbican did not 
>>>> work for that at the time. I have always viewed Barbican as a suitable 
>>>> solution for certificate storage, as that was what it was first designed 
>>>> for. Since then, we have implemented certificate generation and signing 
>>>> logic within a library that does not depend on Barbican, and we can use 
>>>> that safely in production use cases. What we don’t have built in is what 
>>>> Barbican is best at, secure storage for our certificates that will allow 
>>>> multi-conductor operation.
>>>>
>>>> I am opposed to the idea that Magnum should re-implement Barbican for 
>>>> certificate storage just because operators are reluctant to adopt it. If 
>>>> we need to ship a Barbican instance along with each Magnum control plane, 
>>>> so be it, but I don’t see the value in re-inventing the wheel. I promised 
>>>> the OpenStack community that we were out to integrate with and enhance 
>>>> OpenStack not to replace it.
>>>>
>>>> Now, with all that said, I do recognize that not all clouds are motivated 
>>>> to use all available security best practices. They may be operating in 
>>>> environments that they believe are already secure (because of a secure 
>>>> perimeter), and that it’s okay to run fundamentally insecure software 
>>>> within those environments. As misguided as this viewpoint may be, it’s 
>>>> common. My belief is that it’s best to offer the best practice by default, 
>>>> and only allow insecure operation when someone deliberately turns off 
>>>> fundamental security features.
>>>>
>>>> With all this said, I also care about Magnum adoption as much as all of 
>>>> us, so I’d like us to think creatively about how to strike the right 
>>>> balance between re-implementing existing technology, and making that 
>>>> technology easily accessible.

Re: [openstack-dev] [magnum] High Availability

2016-03-22 Thread Adrian Otto
Team,

Time to close down this thread and start a new one. I’m going to change the 
subject line, and start with a summary. Please restrict further discussion on 
this thread to the subject of High Availability.

Thanks,

Adrian

On Mar 22, 2016, at 11:52 AM, Daneyon Hansen (danehans) 
<daneh...@cisco.com<mailto:daneh...@cisco.com>> wrote:



From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Monday, March 21, 2016 at 8:19 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] High Availability

Tim,

Thanks for your advice. I respect your point of view and we will definitely 
encourage our users to try Barbican if they see fit. However, for the sake of 
Magnum, I think we have to decouple from Barbican at the current stage. The 
coupling of Magnum and Barbican will increase the size of the system by a factor 
of two (1 project -> 2 projects), which will significantly increase the overall 
complexity.
· For developers, it incurs significant overhead in development, 
quality assurance, and maintenance.
· For operators, it doubles the amount of effort needed to deploy and 
monitor the system.
· For users, a large system is likely to be unstable and fragile, which 
affects the user experience.
From my point of view, I would like to minimize the system we are going to ship, 
so that we can reduce the maintenance overhead and provide a stable system 
to our users.

I noticed that there are several suggestions to “force” our users to install 
Barbican, with which I would respectfully disagree. Magnum is a young project and 
we are struggling to increase the adoption rate. I think we need to be nice to our 
users, otherwise they will choose our competitors (there are container services 
everywhere). Please understand that we are not a mature project, like Nova, which 
has thousands of users. We really don’t have the power to force our users to do 
what they don’t like to do.

I also recognize that there are several disagreements from the Barbican team. Per 
my understanding, most of the complaints are about the re-invention of 
Barbican-equivalent functionality in Magnum. To address that, I am going to propose 
an idea to achieve the goal without duplicating Barbican. In particular, I suggest 
adding support for an additional authentication system (Keystone in particular) 
for our Kubernetes bay (potentially for swarm/mesos). As a result, users can 
specify how to secure their bay’s API endpoint:
· TLS: This option requires Barbican to be installed for storing the 
TLS certificates.
· Keystone: This option doesn’t require Barbican. Users will use their 
OpenStack credentials to log into Kubernetes.

I believe this is a sensible option that addresses the original problem 
statement in [1]:

"Magnum currently controls Kubernetes API services using unauthenticated HTTP. 
If an attacker knows the api_address of a Kubernetes Bay, (s)he can control the 
cluster without any access control."

The [1] problem statement is authenticating the bay API endpoint, not 
encrypting it. With the option you propose, we can leave the existing 
tls-disabled attribute alone and continue supporting encryption. Using Keystone 
to authenticate the Kubernetes API already exists outside of Magnum in 
Hypernetes [2]. We will need to investigate support for the other coe types.

[1] https://github.com/openstack/magnum/blob/master/specs/tls-support-magnum.rst
[2] http://thenewstack.io/hypernetes-brings-multi-tenancy-microservices/



I am going to start another ML thread to describe the details. You are welcome to 
provide your input. Thanks.

Best regards,
Hongbin

From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: March-19-16 5:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Saturday 19 March 2016 at 04:52
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] High Availability

...
If you disagree, I would request you to justify why this approach works for 
Heat but not for Magnum. I also wonder if Heat has a plan to set a hard 
dependency on Barbican for just protecting the hidden parameters.


There is a risk that we use decisions made by other projects to justify how 
Magnum is implemented.

Re: [openstack-dev] [magnum] High Availability

2016-03-22 Thread Daneyon Hansen (danehans)


From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Monday, March 21, 2016 at 8:19 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] High Availability

Tim,

Thanks for your advice. I respect your point of view and we will definitely 
encourage our users to try Barbican if they see fit. However, for the sake of 
Magnum, I think we have to decouple from Barbican at the current stage. The 
coupling of Magnum and Barbican will increase the size of the system by a factor 
of two (1 project -> 2 projects), which will significantly increase the overall 
complexity.

· For developers, it incurs significant overhead in development, 
quality assurance, and maintenance.

· For operators, it doubles the amount of effort needed to deploy and 
monitor the system.

· For users, a large system is likely to be unstable and fragile, which 
affects the user experience.
From my point of view, I would like to minimize the system we are going to ship, 
so that we can reduce the maintenance overhead and provide a stable system 
to our users.

I noticed that there are several suggestions to “force” our users to install 
Barbican, with which I would respectfully disagree. Magnum is a young project and 
we are struggling to increase the adoption rate. I think we need to be nice to our 
users, otherwise they will choose our competitors (there are container services 
everywhere). Please understand that we are not a mature project, like Nova, which 
has thousands of users. We really don’t have the power to force our users to do 
what they don’t like to do.

I also recognize that there are several disagreements from the Barbican team. Per 
my understanding, most of the complaints are about the re-invention of 
Barbican-equivalent functionality in Magnum. To address that, I am going to propose 
an idea to achieve the goal without duplicating Barbican. In particular, I suggest 
adding support for an additional authentication system (Keystone in particular) 
for our Kubernetes bay (potentially for swarm/mesos). As a result, users can 
specify how to secure their bay’s API endpoint:

· TLS: This option requires Barbican to be installed for storing the 
TLS certificates.

· Keystone: This option doesn’t require Barbican. Users will use their 
OpenStack credentials to log into Kubernetes.

I believe this is a sensible option that addresses the original problem 
statement in [1]:

"Magnum currently controls Kubernetes API services using unauthenticated HTTP. 
If an attacker knows the api_address of a Kubernetes Bay, (s)he can control the 
cluster without any access control."

The [1] problem statement is authenticating the bay API endpoint, not 
encrypting it. With the option you propose, we can leave the existing 
tls-disabled attribute alone and continue supporting encryption. Using Keystone 
to authenticate the Kubernetes API already exists outside of Magnum in 
Hypernetes [2]. We will need to investigate support for the other coe types.

[1] https://github.com/openstack/magnum/blob/master/specs/tls-support-magnum.rst
[2] http://thenewstack.io/hypernetes-brings-multi-tenancy-microservices/



I am going to start another ML thread to describe the details. You are welcome to 
provide your input. Thanks.

Best regards,
Hongbin

From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: March-19-16 5:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Saturday 19 March 2016 at 04:52
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] High Availability

...
If you disagree, I would request you to justify why this approach works for 
Heat but not for Magnum. I also wonder if Heat has a plan to set a hard 
dependency on Barbican for just protecting the hidden parameters.


There is a risk that we use decisions made by other projects to justify how 
Magnum is implemented. Heat was created 3 years ago according to 
https://www.openstack.org/software/project-navigator/ and Barbican only 2 years 
ago, thus Barbican may not have been an option (or a high risk one).

Barbican has demonstrated that the project has corporate diversity and good 
stability 
(https://www.openstack.org/software

Re: [openstack-dev] [magnum] High Availability

2016-03-22 Thread Ian Cordasco
 

-Original Message-
From: Hongbin Lu <hongbin...@huawei.com>
Reply: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Date: March 21, 2016 at 22:22:01
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject:  Re: [openstack-dev] [magnum] High Availability

> Tim,
>  
> Thanks for your advice. I respect your point of view and we will definitely 
> encourage  
> our users to try Barbican if they see fit. However, for the sake of Magnum, 
> I think we have  
> to decouple from Barbican at the current stage. The coupling of Magnum and 
> Barbican will  
> increase the size of the system by a factor of two (1 project -> 2 projects), 
> which will  
> significantly increase the overall complexity.

Hi Hongbin,

I think you're missing the fact that Tim represents a very large and very 
visible user of OpenStack, CERN.

> · For developers, it incurs significant overhead in development, quality 
> assurance,  
> and maintenance.

Are you sure? It seems like barbican isn't a common problem among developers. 
It's more of a problem for operators because the dependency is very poorly 
documented.

> · For operators, it doubles the amount of effort needed to deploy and monitor 
> the system.  

This makes operators sound a bit ... primitive in how they deploy things. That 
seems quite unfair. CERN is using Puppet-OpenStack which might need help to 
improve its Magnum and Barbican puppet modules, but I doubt this is a big 
concern for them. People using the new ansible roles will notice similar gaps 
given their age, but these playbooks transparently provide everything to deploy 
and monitor the system. It's no more difficult for them to deploy both magnum 
and barbican than it is to deploy one or the other. I'm sure the Chef OpenStack 
efforts are also similarly easy to add to an existing OpenStack deployment.

The only people who might have problems with deploying the two in conjunction 
are people following the install guide and using system packages *only* without 
automation. I think it's also fair to say that this group of people are not 
your majority of operators. Further, given the lack of install guide content 
for magnum, I find it doubtful people are performing magnum installs by hand 
like this.

Do you have real operator feedback complaining about this or is this a concern 
you're anticipating?

> · For users, a large system is likely to be unstable and fragile, which 
> affects the user  
> experience.
> From my point of view, I would like to minimize the system we are going to 
> ship, so that we can  
> reduce the maintenance overhead and provide a stable system to our users.

Except you are only shipping Magnum and the Barbican team is shipping Barbican. 
OpenStack is less stable because it has separate services for the core compute 
portion. Further, Nova should apparently have its own way of accepting uploads 
for and managing images as well as block storage management because depending 
on Glance and Cinder for that is introducing fragility and *potential* 
instability.

OpenStack relies on other services and their teams of subject matter experts 
for good reason. It's because no service should manage every last thing itself 
when another service exists that can and is doing that in a better manner.

> I noticed that there are several suggestions to “force” our users to install 
> Barbican,  
> with which I would respectfully disagree. Magnum is a young project and we are 
> struggling  
> to increase the adoption rate. I think we need to be nice to our users, 
> otherwise they  
> will choose our competitors (there are container services everywhere). Please 
> understand  
> that we are not a mature project, like Nova, which has thousands of users. We 
> really don’t  
> have the power to force our users to do what they don’t like to do.

Why are you attributing all of your adoption issues to needing Barbican? One 
initial barrier to my evaluation of Magnum was its lack of documentation that 
is geared towards operators at all. The next barrier was the client claiming it 
supported Keystone V3 and not actually doing so (which was admittedly easily 
fixed). Putting all the blame on Barbican is a bit bizarre from my point of 
view as someone who has and is deploying Magnum.

> I also recognize that there are several disagreements from the Barbican team. Per 
> my understanding,  
> most of the complaints are about the re-invention of Barbican-equivalent 
> functionality  
> in Magnum. To address that, I am going to propose an idea to achieve the goal 
> without duplicating  
> Barbican. In particular, I suggest adding support for an additional 
> authentication system  
> (Keystone in particular) for our Kubernetes bay (potentially for 
> swarm/mesos). As 

Re: [openstack-dev] [magnum] High Availability

2016-03-21 Thread Hongbin Lu
Tim,

Thanks for your advice. I respect your point of view and we will definitely 
encourage our users to try Barbican if they see fit. However, for the sake of 
Magnum, I think we have to decouple from Barbican at the current stage. The 
coupling of Magnum and Barbican will increase the size of the system by a factor 
of two (1 project -> 2 projects), which will significantly increase the overall 
complexity.

· For developers, it incurs significant overhead in development, 
quality assurance, and maintenance.

· For operators, it doubles the amount of effort needed to deploy and 
monitor the system.

· For users, a large system is likely to be unstable and fragile, which 
affects the user experience.
From my point of view, I would like to minimize the system we are going to ship, 
so that we can reduce the maintenance overhead and provide a stable system 
to our users.

I noticed that there are several suggestions to “force” our users to install 
Barbican, with which I would respectfully disagree. Magnum is a young project and 
we are struggling to increase the adoption rate. I think we need to be nice to our 
users, otherwise they will choose our competitors (there are container services 
everywhere). Please understand that we are not a mature project, like Nova, which 
has thousands of users. We really don’t have the power to force our users to do 
what they don’t like to do.

I also recognize that there are several disagreements from the Barbican team. Per 
my understanding, most of the complaints are about the re-invention of 
Barbican-equivalent functionality in Magnum. To address that, I am going to propose 
an idea to achieve the goal without duplicating Barbican. In particular, I suggest 
adding support for an additional authentication system (Keystone in particular) 
for our Kubernetes bay (potentially for swarm/mesos). As a result, users can 
specify how to secure their bay’s API endpoint:

· TLS: This option requires Barbican to be installed for storing the 
TLS certificates.

· Keystone: This option doesn’t require Barbican. Users will use their 
OpenStack credentials to log into Kubernetes.

I am going to start another ML thread to describe the details. You are welcome to 
provide your input. Thanks.

Best regards,
Hongbin

From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: March-19-16 5:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Saturday 19 March 2016 at 04:52
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] High Availability

...
If you disagree, I would request you to justify why this approach works for 
Heat but not for Magnum. I also wonder if Heat has a plan to set a hard 
dependency on Barbican for just protecting the hidden parameters.


There is a risk that we use decisions made by other projects to justify how 
Magnum is implemented. Heat was created 3 years ago according to 
https://www.openstack.org/software/project-navigator/ and Barbican only 2 years 
ago, thus Barbican may not have been an option (or a high risk one).

Barbican has demonstrated that the project has corporate diversity and good 
stability 
(https://www.openstack.org/software/releases/liberty/components/barbican). 
There are some areas that could be improved (packaging and puppet modules often 
need some more investment).

I think it is worth a go to try it out and have concrete areas to improve if 
there are problems.

Tim

If you don’t like code duplication between Magnum and Heat, I would suggest 
moving the implementation to an oslo library to make it DRY. Thoughts?

[1] 
https://specs.openstack.org/openstack/heat-specs/specs/juno/encrypt-hidden-parameters.html

Best regards,
Hongbin

From: David Stanek [mailto:dsta...@dstanek.com]
Sent: March-18-16 4:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


On Fri, Mar 18, 2016 at 4:03 PM Douglas Mendizábal 
<douglas.mendiza...@rackspace.com<mailto:douglas.mendiza...@rackspace.com>> 
wrote:
[snip]
>
> Regarding the Keystone solution, I'd like to hear the Keystone team's 
> feedback on that.  It definitely sounds to me like you're trying to put a 
> square peg in a round hole.
>

I believe that using Keystone for this is a mistake. As mentioned in the 
blueprint, Keystone is not encrypting the data so magnum would be on the hook 
to do it. So that means that if security is a requirement you'd have to 
duplicate more than just code. magnum would start having a larger security burden.

Re: [openstack-dev] [magnum] High Availability

2016-03-20 Thread Hongbin Lu
The Magnum team discussed Anchor several times (in the design summit/midcycle). 
According to what I remember, the conclusion was to leverage Anchor through 
Barbican (presumably there is an Anchor backend for Barbican). Is Anchor 
support in Barbican still on the roadmap?

Best regards,
Hongbin

> -Original Message-
> From: Clark, Robert Graham [mailto:robert.cl...@hpe.com]
> Sent: March-20-16 1:57 AM
> To: maishsk+openst...@maishsk.com; OpenStack Development Mailing List
> (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
> At the risk of muddying the waters further, I recently chatted with
> some of you about Anchor, it's an ephemeral PKI system set up to provide
> private community PKI - certificate services for internal systems, a
> lot like k8 pods.
> 
> An overview of why revocation doesn't work very well in many cases and
> how ephemeral PKI helps: https://openstack-
> security.github.io/tooling/2016/01/20/ephemeral-pki.html
> 
> First half of a threat analysis on Anchor, the Security Project's
> implementation of ephemeral PKI: https://openstack-
> security.github.io/threatanalysis/2016/02/07/anchorTA.html
> 
> This might not solve your problem, it's certainly not a direct drop-in
> for Barbican (and it never will be) but if your primary concern is
> Certificate Management for internal systems (not presenting
> certificates over the edge of the cloud) you might find some of its
> properties valuable. Not least, it's trivial to make HA since it is
> stateless, and trivial to deploy since it is a single Pecan service.
> 
> There's a reasonably complete deck on Anchor here:
> https://docs.google.com/presentation/d/1HDyEiSA5zp6HNdDZcRAYMT5GtxqkHrx
> brqDRzITuSTc/edit?usp=sharing
> 
> And of course, code over here:
> http://git.openstack.org/cgit/openstack/anchor
> 
> Cheers
> -Rob
> 
> > -Original Message-
> > From: Maish Saidel-Keesing [mailto:mais...@maishsk.com]
> > Sent: 19 March 2016 18:10
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [magnum] High Availability
> >
> > Forgive me for the top post and also for asking the obvious (with my
> > Operator hat on)
> >
> > Relying on an external service for certificate store - is the best
> > option - assuming of course that the certificate store is actually
> > also highly available.
> >
> > Is that the case today with Barbican?
> >
> > According to the architecture docs [1] I see that they are using a
> > relational database. MySQL? PostgreSQL? Does that now mean we have an
> > additional database to maintain, backup, provide HA for as an
> Operator?
> >
> > The only real reference I can see to anything remotely HA is this [2]
> > and this [3]
> >
> > An overall solution is highly available *only* if all of the parts it
> > relies on are also highly available.
> >
> >
> > [1]
> >
> http://docs.openstack.org/developer/barbican/contribute/architecture.h
> > tml#overall-architecture [2]
> > https://github.com/cloudkeep-ops/barbican-vagrant-zero
> > [3]
> > http://lists.openstack.org/pipermail/openstack/2014-March/006100.html
> >
> > Some food for thought
> >
> > --
> > Best Regards,
> > Maish Saidel-Keesing
> >
> >
> > On 03/18/16 17:18, Hongbin Lu wrote:
> > > Douglas,
> > >
> > > I am not opposed to adopting Barbican in Magnum (in fact, we already
> > > adopted Barbican). What I am opposed to is a Barbican lock-in,
> which
> > already has a negative impact on Magnum adoption based on our
> > feedback. I also want to see an increase of Barbican adoption in the
> future, with all our users having Barbican installed in their clouds. If
> that happens, I have no problem with a hard dependency on Barbican.
> > >
> > > Best regards,
> > > Hongbin
> > >
> > > -Original Message-
> > > From: Douglas Mendizábal [mailto:douglas.mendiza...@rackspace.com]
> > > Sent: March-18-16 9:45 AM
> > > To: openstack-dev@lists.openstack.org
> > > Subject: Re: [openstack-dev] [magnum] High Availability
> > >
> > > Hongbin,
> > >
> > > I think Adrian makes some excellent points regarding the adoption
> of
> > > Barbican.  As the PTL for Barbican, it's frustrating to me to
> > constantly hear from other projects that securing their sensitive
> data
> > is a requirement but then turn around and say that deploying Barbican
> is a problem.
> > >
> > > I guess I'm having a hard time understanding the operator persona that is
> > > willing to deploy new services with security features but unwilling to also
> > > deploy the service that is meant to secure sensitive data across all of
> > > OpenStack.

Re: [openstack-dev] [magnum] High Availability

2016-03-20 Thread Hongbin Lu
Thanks for your input. It sounds like we have no other option besides Barbican 
as long as we need to store credentials in Magnum. Then I have a new proposal: 
switch to an alternative authentication mechanism that doesn't require storing 
credentials in Magnum. For example, the following options are available in 
Kubernetes [1]:

· Client certificate authentication

· Token File

· OpenID Connect ID Token

· Basic authentication

· Keystone authentication

Could we pick one of those?

[1] http://kubernetes.io/docs/admin/authentication/
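
To make the token-file option above concrete, here is a minimal sketch (the file 
path, token value and master address are hypothetical; the flag names follow the 
Kubernetes admin docs in [1]):

    # /etc/kubernetes/tokens.csv -- format: token,user,uid
    31ada4fd-adec-460c-809a-9e56ceb75269,kube-admin,1

    # passed to the API server on the master node
    kube-apiserver ... --token-auth-file=/etc/kubernetes/tokens.csv

    # clients then authenticate with a bearer token
    curl -k -H "Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269" \
         https://<master-ip>:6443/api/v1/namespaces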

Best regards,
Hongbin

From: Dave McCowan (dmccowan) [mailto:dmcco...@cisco.com]
Sent: March-19-16 10:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


The most basic requirement here for Magnum is that it needs a safe place to 
store credentials.  A safe place cannot be provided by just a library or even 
by just a daemon.  Secure storage is provided by either a hardware solution (an 
HSM) or a software solution (SoftHSM, DogTag, IPA, IdM).  A project should give 
the user a variety of secure storage options.

On this, we have competing requirements.  Devs need a turnkey option for easy 
testing locally or in the gate.  Users kicking the tires want a realistic 
solution they can try out easily with DevStack.  Operators who already have secure 
storage deployed for their cloud want an option that plugs into their existing 
HSMs.

Any roll-your-own option is not going to meet all of these requirements.

A good example, that does meet all of these requirements, is the key manager 
implementation in Nova and Cinder. [1] [2]

Nova and Cinder work together to provide volume encryption, and like Magnum, 
have a need to store and share keys securely.  Using a plugin architecture, and 
the Barbican API, they implement a variety of key storage options:
- Fixed key allows for insecure stand-alone operation, running only Nova and 
Cinder.
- Barbican with a static key allows for easy deployment that can be started 
within DevStack with a few lines of config.
- Barbican with a secure backend allows for production-grade secure storage of 
keys that has been tested on a variety of HSMs and software options (a rough 
config sketch follows below).
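
As a rough illustration, the choice ends up being a couple of lines in 
nova.conf/cinder.conf (exact option and class names vary by release, so treat 
these purely as an illustrative sketch, not copy-paste values):

    [keymgr]
    # option 1: fixed key, insecure, fine for stand-alone dev/test only
    fixed_key = 0000000000000000000000000000000000000000000000000000000000000000

    # options 2 and 3: point the key manager at Barbican instead; the static
    # key vs. HSM decision then lives entirely in Barbican's own backend config
    # api_class = <the Barbican key manager class shipped with the release>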

Barbican's adoption is growing.  Nova, Cinder, Neutron LBaaS, Sahara, and 
Magnum all have implementations using Barbican.  Swift and DNSSec also have use 
cases.  There are both RPM and Debian packages available for Barbican.  There 
are (at least tech preview)  versions of puppet modules, Ansible playbooks, and 
DevStack plugins to deploy Barbican.

In summary, I think using Barbican absorbs the complexity of doing secure 
storage correctly.  It gives operators production grade secure storage options, 
while giving devs easier options.

--Dave McCowan

[1] https://github.com/openstack/nova/tree/master/nova/keymgr
[2] https://github.com/openstack/cinder/tree/master/cinder/keymgr

From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Friday, March 18, 2016 at 10:52 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] High Availability

OK. If using Keystone is not acceptable, I am going to propose a new approach:

· Store data in the Magnum DB

· Encrypt data before writing it to the DB

· Decrypt data after loading it from the DB

· Have the encryption/decryption key stored in a config file

· Use an encryption/decryption algorithm provided by a library

The approach above is the exact approach used by Heat to protect hidden 
parameters [1]. Compared to the Barbican option, this approach is much lighter 
and simpler, and provides a basic level of data protection. This option is a 
good supplement to the Barbican option, which is heavier but provides an advanced 
level of protection. It will fit the use cases where users don't want to 
install Barbican but want basic protection.
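
A minimal sketch of what that could look like in Magnum (the option name, module 
layout and the choice of Fernet are illustrative assumptions on my side; Heat's 
own implementation differs in detail):

    from cryptography.fernet import Fernet
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.StrOpt('cert_encryption_key',
                                   help='Fernet key used to encrypt bay '
                                        'certificates before writing to the DB')])

    def encrypt(plaintext_bytes):
        # The key lives in magnum.conf, so protecting the config file (and
        # rotating the key) becomes the operator's responsibility.
        return Fernet(CONF.cert_encryption_key.encode()).encrypt(plaintext_bytes)

    def decrypt(ciphertext_bytes):
        return Fernet(CONF.cert_encryption_key.encode()).decrypt(ciphertext_bytes)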

If you disagree, I would request you to justify why this approach works for 
Heat but not for Magnum. I also wonder if Heat has a plan to set a hard 
dependency on Barbican for just protecting the hidden parameters.

If you don't like code duplication between Magnum and Heat, I would suggest 
moving the implementation to an oslo library to make it DRY. Thoughts?

[1] 
https://specs.openstack.org/openstack/heat-specs/specs/juno/encrypt-hidden-parameters.html

Best regards,
Hongbin

From: David Stanek [mailto:dsta...@dstanek.com]
Sent: March-18-16 4:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


On Fri, Mar 18, 2016 at 4:03 

Re: [openstack-dev] [magnum] High Availability

2016-03-20 Thread Clark, Robert Graham
At the risk of muddying the waters further, I recently chatted with some of you 
about Anchor, it's an ephemeral PKI system set up to provide private community 
PKI - certificate services for internal systems, a lot like k8 pods.

An overview of why revocation doesn't work very well in many cases and how 
ephemeral PKI helps: 
https://openstack-security.github.io/tooling/2016/01/20/ephemeral-pki.html

First half of a threat analysis on Anchor, the Security Project's 
implementation of ephemeral PKI: 
https://openstack-security.github.io/threatanalysis/2016/02/07/anchorTA.html

This might not solve your problem, it's certainly not a direct drop-in for 
Barbican (and it never will be) but if your primary concern is Certificate 
Management for internal systems (not presenting certificates over the edge of 
the cloud) you might find some of its properties valuable. Not least, it's 
trivial to make HA since it is stateless, and trivial to deploy since it is a 
single Pecan service.

There's a reasonably complete deck on Anchor here:
https://docs.google.com/presentation/d/1HDyEiSA5zp6HNdDZcRAYMT5GtxqkHrxbrqDRzITuSTc/edit?usp=sharing

And of course, code over here:
http://git.openstack.org/cgit/openstack/anchor

Cheers
-Rob

> -Original Message-
> From: Maish Saidel-Keesing [mailto:mais...@maishsk.com]
> Sent: 19 March 2016 18:10
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
> Forgive me for the top post and also for asking the obvious (with my
> Operator hat on)
> 
> Relying on an external service for certificate store - is the best
> option - assuming of course that the certificate store is actually also
> highly available.
> 
> Is that the case today with Barbican?
> 
> According to the architecture docs [1] I see that they are using a
> relational database. MySQL? PostgreSQL? Does that now mean we have an
> additional database to maintain, backup, provide HA for as an Operator?
> 
> The only real reference I can see to anything remotely HA is this [2]
> and this [3]
> 
> An overall solution is highly available *only* if all of the parts it
> relies on are also highly available.
> 
> 
> [1]
> http://docs.openstack.org/developer/barbican/contribute/architecture.html#overall-architecture
> [2] https://github.com/cloudkeep-ops/barbican-vagrant-zero
> [3] http://lists.openstack.org/pipermail/openstack/2014-March/006100.html
> 
> Some food for thought
> 
> --
> Best Regards,
> Maish Saidel-Keesing
> 
> 
> On 03/18/16 17:18, Hongbin Lu wrote:
> > Douglas,
> >
> > I am not opposed to adopting Barbican in Magnum (in fact, we already adopted 
> > Barbican). What I am opposed to is a Barbican lock-in, which
> already has a negative impact on Magnum adoption based on our feedback. I 
> also want to see an increase of Barbican adoption in the
> future, with all our users having Barbican installed in their clouds. If that 
> happens, I have no problem with a hard dependency on Barbican.
> >
> > Best regards,
> > Hongbin
> >
> > -Original Message-
> > From: Douglas Mendizábal [mailto:douglas.mendiza...@rackspace.com]
> > Sent: March-18-16 9:45 AM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [magnum] High Availability
> >
> > Hongbin,
> >
> > I think Adrian makes some excellent points regarding the adoption of 
> > Barbican.  As the PTL for Barbican, it's frustrating to me to
> constantly hear from other projects that securing their sensitive data is a 
> requirement but then turn around and say that deploying Barbican
> is a problem.
> >
> > I guess I'm having a hard time understanding the operator persona that is 
> > willing to deploy new services with security features but
> unwilling to also deploy the service that is meant to secure sensitive data 
> across all of OpenStack.
> >
> > I understand one barrier to entry for Barbican is the high cost of Hardware 
> > Security Modules, which we recommend as the best option for
> the Storage and Crypto backends for Barbican.  But there are also other 
> options for securing Barbican using open source software like
> DogTag or SoftHSM.
> >
> > I also expect Barbican adoption to increase in the future, and I was hoping 
> > that Magnum would help drive that adoption.  There are also
> other projects that are actively developing security features like Swift 
> > Encryption, and DNSSEC support in Designate.  Eventually these
> features will also require Barbican, so I agree with Adrian that we as a 
> community should be encouraging deployers to adopt the best
> security practices.
> >
> > Regarding the Keystone solution, I'd like to hear the Keystone team's
> > feedback on that.  It definitely sounds to me like you're trying to put a
> > square peg in a round hole.

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Steven Dake (stdake)


On 3/18/16, 12:59 PM, "Fox, Kevin M" <kevin@pnnl.gov> wrote:

>+1. We should be encouraging a common way of solving these issues across
>all the openstack projects and security is a really important thing.
>spreading it across lots of projects causes more bugs and security
>related bugs cause security incidents. No one wants those.
>
>I'd also like to know why, if an old cloud is willing to deploy a new
>magnum, it's unreasonable to deploy a new barbican at the same time.
>
>If it's a technical reason, let's fix the issue. If it's something else,
>let's discuss it. If it's just an operator not wanting to install 2 things
>instead of just one, I think it's a totally understandable, but
>unreasonable request.

Kevin,

I think the issue comes down to "how" the common way of solving this
problem should be approached.  In barbican's case a daemon and database
are required.  What I wanted early on with Magnum when I was involved was
a library approach.

Having maintained a deployment project for 2 years, I can tell you each
time we add a new big tent project it adds a bunch of footprint to our
workload.  Operators typically don't even have a tidy deployment tool like
Kolla to work with.  As an example, ceilometer has had containers
available in Kolla for 18 months yet nobody has finished the job on
implementing ceilometer playbooks, even though ceilometer is a soft
dependency of heat for autoscaling.

Many Operators self-deploy so they understand how the system operates.
They lack the ~200 contributors Kolla has to maintain a deployment tool,
and as such, I really don't think that objecting to deploying "Y to get X when
Y could and should be a small footprint library" is unreasonable.

Regards,
-steve
  
>
>Thanks,
>Kevin
>
>From: Douglas Mendizábal [douglas.mendiza...@rackspace.com]
>Sent: Friday, March 18, 2016 6:45 AM
>To: openstack-dev@lists.openstack.org
>Subject: Re: [openstack-dev] [magnum] High Availability
>
>Hongbin,
>
>I think Adrian makes some excellent points regarding the adoption of
>Barbican.  As the PTL for Barbican, it's frustrating to me to constantly
>hear from other projects that securing their sensitive data is a
>requirement but then turn around and say that deploying Barbican is a
>problem.
>
>I guess I'm having a hard time understanding the operator persona that
>is willing to deploy new services with security features but unwilling
>to also deploy the service that is meant to secure sensitive data across
>all of OpenStack.
>
>I understand one barrier to entry for Barbican is the high cost of
>Hardware Security Modules, which we recommend as the best option for the
>Storage and Crypto backends for Barbican.  But there are also other
>options for securing Barbican using open source software like DogTag or
>SoftHSM.
>
>I also expect Barbican adoption to increase in the future, and I was
>hoping that Magnum would help drive that adoption.  There are also other
>projects that are actively developing security features like Swift
>Encryption, and DNSSEC support in Designate.  Eventually these features
>will also require Barbican, so I agree with Adrian that we as a
>community should be encouraging deployers to adopt the best security
>practices.
>
>Regarding the Keystone solution, I'd like to hear the Keystone team's
>feedback on that.  It definitely sounds to me like you're trying to put
>a square peg in a round hole.
>
>- Doug
>
>On 3/17/16 8:45 PM, Hongbin Lu wrote:
>> Thanks Adrian,
>>
>>
>>
>> I think the Keystone approach will work. For others, please speak up if
>> it doesn't work for you.
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>>
>>
>>
>> *From:*Adrian Otto [mailto:adrian.o...@rackspace.com]
>> *Sent:* March-17-16 9:28 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [magnum] High Availability
>>
>>
>>
>> Hongbin,
>>
>>
>>
>> I tweaked the blueprint in accordance with this approach, and approved
>> it for Newton:
>>
>> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
>>
>>
>>
>> I think this is something we can all agree on as a middle ground, If
>> not, I'm open to revisiting the discussion.
>>
>>
>>
>> Thanks,
>>
>>
>>
>> Adrian
>>
>>
>>
>> On Mar 17, 2016, at 6:13 PM, Adrian Otto <adrian.o...@rackspace.com
>> <mailto:adrian.o...@rackspace.com>> wrote:
>>
>>
>>
>> Hongbin,
>>
>> One alternative we could discuss as an option for operators that have a good
>> reason not to use Barbican, is to use Keystone.

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Ricardo Rocha
Hi.

We're on the way, the API is using haproxy load balancing in the same
way all openstack services do here - this part seems to work fine.
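
For reference, the haproxy part is nothing Magnum-specific; a minimal sketch of 
the kind of listener we mean, with hypothetical addresses and the default 9511 
API port:

    listen magnum_api
        bind 192.168.1.100:9511
        balance roundrobin
        option httpchk GET /
        server magnum01 192.168.1.11:9511 check
        server magnum02 192.168.1.12:9511 check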

For the conductor we're stopped due to bay certificates - we don't
currently have barbican so local was the only option. To get them
accessible on all nodes we're considering two options:
- store bay certs in a shared filesystem, meaning a new set of
credentials in the boxes (and a process to renew fs tokens)
- deploy barbican (some bits of puppet missing we're sorting out)

More news next week.

Cheers,
Ricardo

On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans)
 wrote:
> All,
>
> Does anyone have experience deploying Magnum in a highly-available fashion?
> If so, I’m interested in learning from your experience. My biggest unknown
> is the Conductor service. Any insight you can provide is greatly
> appreciated.
>
> Regards,
> Daneyon Hansen
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread David Stanek
On Fri, Mar 18, 2016 at 4:03 PM Douglas Mendizábal <
douglas.mendiza...@rackspace.com> wrote:

> [snip]
> >
> > Regarding the Keystone solution, I'd like to hear the Keystone team's
> feedback on that.  It definitely sounds to me like you're trying to put a
> square peg in a round hole.
> >
>
>
I believe that using Keystone for this is a mistake. As mentioned in the
blueprint, Keystone is not encrypting the data so magnum would be on the
hook to do it. So that means that if security is a requirement you'd have
to duplicate more than just code. magnum would start having a larger
security burden. Since we have a system designed to securely store data I
think that's the best place for data that needs to be secure.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Fox, Kevin M
Yeah, I get that. I've got some sizeable deployments too.

But in the case of using a library, you're scattering all the security bits 
around the various services, and it just pushes the burden of securing it, 
patching all the services, etc. some place else. It's better than each project 
rolling its own security solution, for sure, but if you're deploying the system 
securely, I don't think it really is less of a burden. You trade having to 
figure out how to deploy an extra service for having to pay careful attention 
to every other service to secure them more carefully. I'd argue it should be 
easier to deploy the centralized service than to do it across the other 
services.

Thanks,
Kevin 

From: Steven Dake (stdake) [std...@cisco.com]
Sent: Friday, March 18, 2016 1:33 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability

On 3/18/16, 12:59 PM, "Fox, Kevin M" <kevin@pnnl.gov> wrote:

>+1. We should be encouraging a common way of solving these issues across
>all the openstack projects and security is a really important thing.
>spreading it across lots of projects causes more bugs and security
>related bugs cause security incidents. No one wants those.
>
>I'd also like to know why, if an old cloud is willing to deploy a new
>magnum, it's unreasonable to deploy a new barbican at the same time.
>
>If it's a technical reason, let's fix the issue. If it's something else,
>let's discuss it. If it's just an operator not wanting to install 2 things
>instead of just one, I think it's a totally understandable, but
>unreasonable request.

Kevin,

I think the issue comes down to "how" the common way of solving this
problem should be approached.  In barbican's case a daemon and database
are required.  What I wanted early on with Magnum when I was involved was
a library approach.

Having maintained a deployment project for 2 years, I can tell you each
time we add a new big tent project it adds a bunch of footprint to our
workload.  Operators typically don't even have a tidy deployment tool like
Kolla to work with.  As an example, ceilometer has had containers
available in Kolla for 18 months yet nobody has finished the job on
implementing ceilometer playbooks, even though ceilometer is a soft
dependency of heat for autoscaling.

Many Operators self-deploy so they understand how the system operates.
They lack the ~200 contributors Kolla has to maintain a deployment tool,
and as such, I really don't think that objecting to deploying "Y to get X when
Y could and should be a small footprint library" is unreasonable.

Regards,
-steve

>
>Thanks,
>Kevin
>
>From: Douglas Mendizábal [douglas.mendiza...@rackspace.com]
>Sent: Friday, March 18, 2016 6:45 AM
>To: openstack-dev@lists.openstack.org
>Subject: Re: [openstack-dev] [magnum] High Availability
>
>Hongbin,
>
>I think Adrian makes some excellent points regarding the adoption of
>Barbican.  As the PTL for Barbican, it's frustrating to me to constantly
>hear from other projects that securing their sensitive data is a
>requirement but then turn around and say that deploying Barbican is a
>problem.
>
>I guess I'm having a hard time understanding the operator persona that
>is willing to deploy new services with security features but unwilling
>to also deploy the service that is meant to secure sensitive data across
>all of OpenStack.
>
>I understand one barrier to entry for Barbican is the high cost of
>Hardware Security Modules, which we recommend as the best option for the
>Storage and Crypto backends for Barbican.  But there are also other
>options for securing Barbican using open source software like DogTag or
>SoftHSM.
>
>I also expect Barbican adoption to increase in the future, and I was
>hoping that Magnum would help drive that adoption.  There are also other
>projects that are actively developing security features like Swift
>Encryption, and DNSSEC support in Designate.  Eventually these features
>will also require Barbican, so I agree with Adrian that we as a
>community should be encouraging deployers to adopt the best security
>practices.
>
>Regarding the Keystone solution, I'd like to hear the Keystone team's
>feedback on that.  It definitely sounds to me like you're trying to put
>a square peg in a round hole.
>
>- Doug
>
>On 3/17/16 8:45 PM, Hongbin Lu wrote:
>> Thanks Adrian,
>>
>>
>>
>> I think the Keystone approach will work. For others, please speak up if
>> it doesn't work for you.
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>>
>>
>>
>> *From:*Adrian Otto [mailto:adrian.o...@rackspace.com]
>> *Sent:* March-17-16 9:28 PM
&

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Daneyon Hansen (danehans)
emented in Magnum, which may depend on libraries, but not Barbican.
>>> 
>>> 
>>> [2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html
>> 
>> The context there is important. Barbican was considered for two purposes: 
>> (1) CA signing capability, and (2) certificate storage. My willingness to 
>> implement an alternative was based on our need to get a certificate 
>> generation and signing solution that actually worked, as Barbican did not 
>> work for that at the time. I have always viewed Barbican as a suitable 
>> solution for certificate storage, as that was what it was first designed 
>> for. Since then, we have implemented certificate generation and signing 
>> logic within a library that does not depend on Barbican, and we can use that 
>> safely in production use cases. What we don’t have built in is what Barbican 
>> is best at, secure storage for our certificates that will allow 
>> multi-conductor operation.
>> 
>> I am opposed to the idea that Magnum should re-implement Barbican for 
>> certificate storage just because operators are reluctant to adopt it. If we 
>> need to ship a Barbican instance along with each Magnum control plane, so be 
>> it, but I don’t see the value in re-inventing the wheel. I promised the 
>> OpenStack community that we were out to integrate with and enhance OpenStack 
>> not to replace it.
>> 
>> Now, with all that said, I do recognize that not all clouds are motivated to 
>> use all available security best practices. They may be operating in 
>> environments that they believe are already secure (because of a secure 
>> perimeter), and that it’s okay to run fundamentally insecure software within 
>> those environments. As misguided as this viewpoint may be, it’s common. My 
>> belief is that it’s best to offer the best practice by default, and only 
>> allow insecure operation when someone deliberately turns off fundamental 
>> security features.
>> 
>> With all this said, I also care about Magnum adoption as much as all of us, 
>> so I’d like us to think creatively about how to strike the right balance 
>> between re-implementing existing technology, and making that technology 
>> easily accessible.
>> 
>> Thanks,
>> 
>> Adrian
>> 
>>> 
>>> Best regards,
>>> Hongbin
>>> 
>>> -Original Message-
>>> From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
>>> Sent: March-17-16 4:32 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [magnum] High Availability
>>> 
>>> I have trouble understanding that blueprint. I will put some remarks on the 
>>> whiteboard. Duplicating Barbican sounds like a mistake to me.
>>> 
>>> --
>>> Adrian
>>> 
>>>> On Mar 17, 2016, at 12:01 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
>>>> 
>>>> The problem of missing Barbican alternative implementation has been raised 
>>>> several times by different people. IMO, this is a very serious issue that 
>>>> will hurt Magnum adoption. I created a blueprint for that [1] and set the 
>>>> PTL as approver. It will be picked up by a contributor once it is approved.
>>>> 
>>>> [1] 
>>>> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-sto
>>>> re
>>>> 
>>>> Best regards,
>>>> Hongbin
>>>> 
>>>> -Original Message-
>>>> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
>>>> Sent: March-17-16 2:39 PM
>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>> Subject: Re: [openstack-dev] [magnum] High Availability
>>>> 
>>>> Hi.
>>>> 
>>>> We're on the way, the API is using haproxy load balancing in the same way 
>>>> all openstack services do here - this part seems to work fine.
>>>> 
>>>> For the conductor we're stopped due to bay certificates - we don't 
>>>> currently have barbican so local was the only option. To get them 
>>>> accessible on all nodes we're considering two options:
>>>> - store bay certs in a shared filesystem, meaning a new set of 
>>>> credentials in the boxes (and a process to renew fs tokens)
>>>> - deploy barbican (some bits of puppet missing we're sorting out)
>>>> 
>>>> More 

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Daneyon Hansen (danehans)

Aside from the bay certificates/Barbican issue. Is anyone aware of any other 
potential problems for high-availability, especially for Conductor?

Regards,
Daneyon Hansen

> On Mar 17, 2016, at 12:03 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
> 
> The problem of missing Barbican alternative implementation has been raised 
> several times by different people. IMO, this is a very serious issue that 
> will hurt Magnum adoption. I created a blueprint for that [1] and set the PTL 
> as approver. It will be picked up by a contributor once it is approved.
> 
> [1] https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store 
> 
> Best regards,
> Hongbin
> 
> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com] 
> Sent: March-17-16 2:39 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
> Hi.
> 
> We're on the way, the API is using haproxy load balancing in the same way all 
> openstack services do here - this part seems to work fine.
> 
> For the conductor we're stopped due to bay certificates - we don't currently 
> have barbican so local was the only option. To get them accessible on all 
> nodes we're considering two options:
> - store bay certs in a shared filesystem, meaning a new set of credentials in 
> the boxes (and a process to renew fs tokens)
> - deploy barbican (some bits of puppet missing we're sorting out)
> 
> More news next week.
> 
> Cheers,
> Ricardo
> 
>> On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans) 
>> <daneh...@cisco.com> wrote:
>> All,
>> 
>> Does anyone have experience deploying Magnum in a highly-available fashion?
>> If so, I’m interested in learning from your experience. My biggest 
>> unknown is the Conductor service. Any insight you can provide is 
>> greatly appreciated.
>> 
>> Regards,
>> Daneyon Hansen
>> 
>> __
>>  OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Fox, Kevin M
+1. We should be encouraging a common way of solving these issues across all 
the OpenStack projects, and security is a really important thing. Spreading it 
across lots of projects causes more bugs, and security-related bugs cause 
security incidents. No one wants those.

I'd also like to know why, if an old cloud is willing to deploy a new Magnum, 
it's unreasonable to deploy a new Barbican at the same time.

If it's a technical reason, let's fix the issue. If it's something else, let's 
discuss it. If it's just an operator not wanting to install two things instead of 
just one, I think it's a totally understandable, but unreasonable, request.

Thanks,
Kevin

From: Douglas Mendizábal [douglas.mendiza...@rackspace.com]
Sent: Friday, March 18, 2016 6:45 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] High Availability

Hongbin,

I think Adrian makes some excellent points regarding the adoption of
Barbican.  As the PTL for Barbican, it's frustrating to me to constantly
hear other projects say that securing their sensitive data is a
requirement but then turn around and say that deploying Barbican is a
problem.

I guess I'm having a hard time understanding the operator persona that
is willing to deploy new services with security features but unwilling
to also deploy the service that is meant to secure sensitive data across
all of OpenStack.

I understand one barrier to entry for Barbican is the high cost of
Hardware Security Modules, which we recommend as the best option for the
Storage and Crypto backends for Barbican.  But there are also other
options for securing Barbican using open source software like DogTag or
SoftHSM.

I also expect Barbican adoption to increase in the future, and I was
hoping that Magnum would help drive that adoption.  There are also other
projects that are actively developing security features like Swift
Encryption, and DNSSEC support in Designate.  Eventually these features
will also require Barbican, so I agree with Adrian that we as a
community should be encouraging deployers to adopt the best security
practices.

Regarding the Keystone solution, I'd like to hear the Keystone team's
feedback on that.  It definitely sounds to me like you're trying to put
a square peg in a round hole.

- Doug

On 3/17/16 8:45 PM, Hongbin Lu wrote:
> Thanks Adrian,
>
>
>
> I think the Keystone approach will work. For others, please speak up if
> it doesn’t work for you.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:*Adrian Otto [mailto:adrian.o...@rackspace.com]
> *Sent:* March-17-16 9:28 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] High Availability
>
>
>
> Hongbin,
>
>
>
> I tweaked the blueprint in accordance with this approach, and approved
> it for Newton:
>
> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
>
>
>
> I think this is something we can all agree on as a middle ground. If
> not, I’m open to revisiting the discussion.
>
>
>
> Thanks,
>
>
>
> Adrian
>
>
>
> On Mar 17, 2016, at 6:13 PM, Adrian Otto <adrian.o...@rackspace.com
> <mailto:adrian.o...@rackspace.com>> wrote:
>
>
>
> Hongbin,
>
> One alternative we could discuss as an option for operators that
> have a good reason not to use Barbican, is to use Keystone.
>
> Keystone credentials store:
> 
> http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#credentials-v3-credentials
>
> The contents are stored in plain text in the Keystone DB, so we
> would want to generate an encryption key per bay, encrypt the
> certificate and store it in keystone. We would then use the same key
> to decrypt it upon reading the key back. This might be an acceptable
> middle ground for clouds that will not or can not run Barbican. This
> should work for any OpenStack cloud since Grizzly. The total amount
> of code in Magnum would be small, as the API already exists. We
> would need a library function to encrypt and decrypt the data, and
> ideally a way to select different encryption algorithms in case one
> is judged weak at some point in the future, justifying the use of an
> alternate.
>
> Adrian
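
A minimal sketch of the per-bay key idea quoted above, with the Fernet recipe from
the cryptography library standing in for the encrypt/decrypt helper and a plain
dict standing in for the Keystone credentials API; every name here is an assumption
for illustration, not an existing Magnum or Keystone interface:

    from cryptography.fernet import Fernet

    credential_store = {}  # stand-in for Keystone's POST/GET /v3/credentials

    def save_bay_cert(bay_uuid, cert_pem):
        key = Fernet.generate_key()            # one symmetric key per bay
        credential_store[bay_uuid] = Fernet(key).encrypt(cert_pem)
        return key                             # the cert itself is never stored in clear text

    def load_bay_cert(bay_uuid, key):
        return Fernet(key).decrypt(credential_store[bay_uuid])

    if __name__ == '__main__':
        k = save_bay_cert('bay-1234', b'-----BEGIN CERTIFICATE-----\n...')
        assert load_bay_cert('bay-1234', k).startswith(b'-----BEGIN')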
>
>
> On Mar 17, 2016, at 4:55 PM, Adrian Otto <adrian.o...@rackspace.com
> <mailto:adrian.o...@rackspace.com>> wrote:
>
> Hongbin,
>
>
> On Mar 17, 2016, at 2:25 PM, Hongbin Lu <hongbin...@huawei.com
> <mailto:hongbin...@huawei.com>> wrote:
>
> Adrian,
>
> I think we need a broader set of inputs in this matter, so I moved
> the discussion from the whiteboard back to here

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Douglas Mendizábal
Hongbin,

I'm looking forward to discussing this further at the Austin summit.
I'm very interested in learning more about the negative feedback you're
getting regarding Barbican, so that our team can help alleviate those
concerns where possible.

Thanks,
- Douglas

On 3/18/16 10:18 AM, Hongbin Lu wrote:
> Douglas,
> 
> I am not opposed to adopting Barbican in Magnum (in fact, we already adopted 
> Barbican). What I am opposed to is a Barbican lock-in, which already has a 
> negative impact on Magnum adoption based on our feedback. I also want to see 
> an increase in Barbican adoption in the future, with all our users having 
> Barbican installed in their clouds. If that happens, I have no problem with 
> a hard dependency on Barbican.
> 
> Best regards,
> Hongbin
> 
> -Original Message-
> From: Douglas Mendizábal [mailto:douglas.mendiza...@rackspace.com] 
> Sent: March-18-16 9:45 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
> Hongbin,
> 
> I think Adrian makes some excellent points regarding the adoption of 
> Barbican.  As the PTL for Barbican, it's frustrating to me to constantly hear 
> from other projects that securing their sensitive data is a requirement but 
> then turn around and say that deploying Barbican is a problem.
> 
> I guess I'm having a hard time understanding the operator persona that is 
> willing to deploy new services with security features but unwilling to also 
> deploy the service that is meant to secure sensitive data across all of 
> OpenStack.
> 
> I understand one barrier to entry for Barbican is the high cost of Hardware 
> Security Modules, which we recommend as the best option for the Storage and 
> Crypto backends for Barbican.  But there are also other options for securing 
> Barbican using open source software like DogTag or SoftHSM.
> 
> I also expect Barbican adoption to increase in the future, and I was hoping 
> that Magnum would help drive that adoption.  There are also other projects 
> that are actively developing security features like Swift Encryption, and 
> DNSSEC support in Designate.  Eventually these features will also require 
> Barbican, so I agree with Adrian that we as a community should be encouraging 
> deployers to adopt the best security practices.
> 
> Regarding the Keystone solution, I'd like to hear the Keystone team's 
> feedback on that.  It definitely sounds to me like you're trying to put a 
> square peg in a round hole.
> 
> - Doug
> 
> On 3/17/16 8:45 PM, Hongbin Lu wrote:
>> Thanks Adrian,
>>
>>  
>>
>> I think the Keystone approach will work. For others, please speak up 
>> if it doesn't work for you.
>>
>>  
>>
>> Best regards,
>>
>> Hongbin
>>
>>  
>>
>> *From:*Adrian Otto [mailto:adrian.o...@rackspace.com]
>> *Sent:* March-17-16 9:28 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [magnum] High Availability
>>
>>  
>>
>> Hongbin,
>>
>>  
>>
>> I tweaked the blueprint in accordance with this approach, and approved 
>> it for Newton:
>>
>> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-sto
>> re
>>
>>  
>>
>> I think this is something we can all agree on as a middle ground. If 
>> not, I'm open to revisiting the discussion.
>>
>>  
>>
>> Thanks,
>>
>>  
>>
>> Adrian
>>
>>  
>>
>> On Mar 17, 2016, at 6:13 PM, Adrian Otto <adrian.o...@rackspace.com
>> <mailto:adrian.o...@rackspace.com>> wrote:
>>
>>  
>>
>> Hongbin,
>>
>> One alternative we could discuss as an option for operators that
>> have a good reason not to use Barbican, is to use Keystone.
>>
>> Keystone credentials store:
>> 
>> http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-ap
>> i-v3.html#credentials-v3-credentials
>>
>> The contents are stored in plain text in the Keystone DB, so we
>> would want to generate an encryption key per bay, encrypt the
>> certificate and store it in keystone. We would then use the same key
>> to decrypt it upon reading the key back. This might be an acceptable
>> middle ground for clouds that will not or can not run Barbican. This
>> should work for any OpenStack cloud since Grizzly. The total amount
>> of code in Magnum would be small, as the API already exists. We
>> would need a library function to encrypt and decrypt the data, and
>> ide

[openstack-dev] [magnum] High Availability

2016-03-19 Thread Daneyon Hansen (danehans)
All,

Does anyone have experience deploying Magnum in a highly-available fashion? If 
so, I'm interested in learning from your experience. My biggest unknown is the 
Conductor service. Any insight you can provide is greatly appreciated.

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Douglas Mendizábal
Hongbin,

I think Adrian makes some excellent points regarding the adoption of
Barbican.  As the PTL for Barbican, it's frustrating to me to constantly
hear other projects say that securing their sensitive data is a
requirement but then turn around and say that deploying Barbican is a
problem.

I guess I'm having a hard time understanding the operator persona that
is willing to deploy new services with security features but unwilling
to also deploy the service that is meant to secure sensitive data across
all of OpenStack.

I understand one barrier to entry for Barbican is the high cost of
Hardware Security Modules, which we recommend as the best option for the
Storage and Crypto backends for Barbican.  But there are also other
options for securing Barbican using open source software like DogTag or
SoftHSM.

I also expect Barbican adoption to increase in the future, and I was
hoping that Magnum would help drive that adoption.  There are also other
projects that are actively developing security features like Swift
Encryption, and DNSSEC support in Designate.  Eventually these features
will also require Barbican, so I agree with Adrian that we as a
community should be encouraging deployers to adopt the best security
practices.

Regarding the Keystone solution, I'd like to hear the Keystone team's
feedback on that.  It definitely sounds to me like you're trying to put
a square peg in a round hole.

- Doug

On 3/17/16 8:45 PM, Hongbin Lu wrote:
> Thanks Adrian,
> 
>  
> 
> I think the Keystone approach will work. For others, please speak up if
> it doesn’t work for you.
> 
>  
> 
> Best regards,
> 
> Hongbin
> 
>  
> 
> *From:*Adrian Otto [mailto:adrian.o...@rackspace.com]
> *Sent:* March-17-16 9:28 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] High Availability
> 
>  
> 
> Hongbin,
> 
>  
> 
> I tweaked the blueprint in accordance with this approach, and approved
> it for Newton:
> 
> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
> 
>  
> 
> I think this is something we can all agree on as a middle ground. If
> not, I’m open to revisiting the discussion.
> 
>  
> 
> Thanks,
> 
>  
> 
> Adrian
> 
>  
> 
> On Mar 17, 2016, at 6:13 PM, Adrian Otto <adrian.o...@rackspace.com
> <mailto:adrian.o...@rackspace.com>> wrote:
> 
>  
> 
> Hongbin,
> 
> One alternative we could discuss as an option for operators that
> have a good reason not to use Barbican, is to use Keystone.
> 
> Keystone credentials store:
> 
> http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#credentials-v3-credentials
> 
> The contents are stored in plain text in the Keystone DB, so we
> would want to generate an encryption key per bay, encrypt the
> certificate and store it in keystone. We would then use the same key
> to decrypt it upon reading the key back. This might be an acceptable
> middle ground for clouds that will not or can not run Barbican. This
> should work for any OpenStack cloud since Grizzly. The total amount
> of code in Magnum would be small, as the API already exists. We
> would need a library function to encrypt and decrypt the data, and
> ideally a way to select different encryption algorithms in case one
> is judged weak at some point in the future, justifying the use of an
> alternate.
> 
> Adrian
> 
> 
> On Mar 17, 2016, at 4:55 PM, Adrian Otto <adrian.o...@rackspace.com
> <mailto:adrian.o...@rackspace.com>> wrote:
> 
> Hongbin,
> 
> 
> On Mar 17, 2016, at 2:25 PM, Hongbin Lu <hongbin...@huawei.com
> <mailto:hongbin...@huawei.com>> wrote:
> 
> Adrian,
> 
> I think we need a broader set of inputs in this matter, so I moved
> the discussion from the whiteboard back to here. Please check my replies
> inline.
> 
> 
> I would like to get a clear problem statement written for this.
> As I see it, the problem is that there is no safe place to put
> certificates in clouds that do not run Barbican.
> It seems the solution is to make it easy to add Barbican such that
> it's included in the setup for Magnum.
> 
> No, the solution is to explore a non-Barbican solution to store
> certificates securely.
> 
> 
> I am seeking more clarity about why a non-Barbican solution is
> desired. Why is there resistance to adopting both Magnum and
> Barbican together? I think the answer is that people think they can
> make Magnum work with really old clouds that were set up before
> Barbican was introduced. That expect

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Maish Saidel-Keesing
Forgive me for the top post and also for asking the obvious (with my
Operator hat on)

Relying on an external service for the certificate store is the best
option - assuming, of course, that the certificate store is itself
highly available.

Is that the case today with Barbican?

According to the architecture docs [1], I see that they are using a
relational database. MySQL? PostgreSQL? Does that now mean we have an
additional database to maintain, back up, and provide HA for as an operator?

The only real reference I can see to anything remotely HA is this [2]
and this [3]

An overall solution is highly available *only* if all of the parts it
relies on are also highly available.


[1]
http://docs.openstack.org/developer/barbican/contribute/architecture.html#overall-architecture
[2] https://github.com/cloudkeep-ops/barbican-vagrant-zero
[3] http://lists.openstack.org/pipermail/openstack/2014-March/006100.html

Some food for thought

-- 
Best Regards,
Maish Saidel-Keesing


On 03/18/16 17:18, Hongbin Lu wrote:
> Douglas,
>
> I am not opposed to adopting Barbican in Magnum (in fact, we already adopted 
> Barbican). What I am opposed to is a Barbican lock-in, which already has a 
> negative impact on Magnum adoption based on our feedback. I also want to see 
> an increase in Barbican adoption in the future, with all our users having 
> Barbican installed in their clouds. If that happens, I have no problem with 
> a hard dependency on Barbican.
>
> Best regards,
> Hongbin
>
> -Original Message-
> From: Douglas Mendizábal [mailto:douglas.mendiza...@rackspace.com] 
> Sent: March-18-16 9:45 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [magnum] High Availability
>
> Hongbin,
>
> I think Adrian makes some excellent points regarding the adoption of 
> Barbican.  As the PTL for Barbican, it's frustrating to me to constantly hear 
> from other projects that securing their sensitive data is a requirement but 
> then turn around and say that deploying Barbican is a problem.
>
> I guess I'm having a hard time understanding the operator persona that is 
> willing to deploy new services with security features but unwilling to also 
> deploy the service that is meant to secure sensitive data across all of 
> OpenStack.
>
> I understand one barrier to entry for Barbican is the high cost of Hardware 
> Security Modules, which we recommend as the best option for the Storage and 
> Crypto backends for Barbican.  But there are also other options for securing 
> Barbican using open source software like DogTag or SoftHSM.
>
> I also expect Barbican adoption to increase in the future, and I was hoping 
> that Magnum would help drive that adoption.  There are also other projects 
> that are actively developing security features like Swift Encryption, and 
> DNSSEC support in Designate.  Eventually these features will also require 
> Barbican, so I agree with Adrian that we as a community should be encouraging 
> deployers to adopt the best security practices.
>
> Regarding the Keystone solution, I'd like to hear the Keystone team's 
> feedback on that.  It definitely sounds to me like you're trying to put a 
> square peg in a round hole.
>
> - Doug
>
> On 3/17/16 8:45 PM, Hongbin Lu wrote:
>> Thanks Adrian,
>>
>>  
>>
>> I think the Keystone approach will work. For others, please speak up 
>> if it doesn't work for you.
>>
>>  
>>
>> Best regards,
>>
>> Hongbin
>>
>>  
>>
>> *From:*Adrian Otto [mailto:adrian.o...@rackspace.com]
>> *Sent:* March-17-16 9:28 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [magnum] High Availability
>>
>>  
>>
>> Hongbin,
>>
>>  
>>
>> I tweaked the blueprint in accordance with this approach, and approved 
>> it for Newton:
>>
>> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-sto
>> re
>>
>>  
>>
>> I think this is something we can all agree on as a middle ground. If 
>> not, I'm open to revisiting the discussion.
>>
>>  
>>
>> Thanks,
>>
>>  
>>
>> Adrian
>>
>>  
>>
>> On Mar 17, 2016, at 6:13 PM, Adrian Otto <adrian.o...@rackspace.com
>> <mailto:adrian.o...@rackspace.com>> wrote:
>>
>>  
>>
>> Hongbin,
>>
>> One alternative we could discuss as an option for operators that
>> have a good reason not to use Barbican, is to use Keystone.
>>
>> Keystone credentials store:
>> 
>> http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Clark, Robert Graham
I thought that a big part of the use case with Magnum + Barbican was 
Certificate management for Bays?

-Rob

From: "Dave McCowan (dmccowan)"
Reply-To: OpenStack List
Date: Saturday, 19 March 2016 14:56
To: OpenStack List
Subject: Re: [openstack-dev] [magnum] High Availability


The most basic requirement here for Magnum is that it needs a safe place to 
store credentials.  A safe place cannot be provided by just a library or even 
by just a daemon.  Secure storage is provided by either a hardware solution (an 
HSM) or a software solution (SoftHSM, DogTag, IPA, IdM).  A project should give 
a variety of secure storage options to the user.

On this, we have competing requirements.  Devs need a turnkey option for easy 
testing locally or in the gate.  Users kicking the tires want a realistic 
solution they can try out easily with DevStack.  Operators who already have secure 
storage deployed for their cloud want an option that plugs into their existing 
HSMs.

Any roll-your-own option is not going to meet all of these requirements.

A good example that does meet all of these requirements is the key manager 
implementation in Nova and Cinder. [1] [2]

Nova and Cinder work together to provide volume encryption, and like Magnum, 
have a need to store and share keys securely.  Using a plugin architecture and 
the Barbican API, they implement a variety of key storage options:
- Fixed key allows for insecure stand-alone operation, running only Nova and 
Cinder.
- Barbican with a static key allows for easy deployment that can be started 
within DevStack with a few lines of config.
- Barbican with a secure backend allows for production-grade secure storage of 
keys that has been tested on a variety of HSMs and software options.

Barbican's adoption is growing.  Nova, Cinder, Neutron LBaaS, Sahara, and 
Magnum all have implementations using Barbican.  Swift and DNSSec also have use 
cases.  There are both RPM and Debian packages available for Barbican.  There 
are (at least tech-preview) versions of Puppet modules, Ansible playbooks, and 
DevStack plugins to deploy Barbican.

In summary, I think using Barbican absorbs the complexity of doing secure 
storage correctly.  It gives operators production-grade secure storage options, 
while giving devs easier options.

--Dave McCowan

[1] https://github.com/openstack/nova/tree/master/nova/keymgr
[2] https://github.com/openstack/cinder/tree/master/cinder/keymgr
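
A rough sketch of the plugin pattern Dave describes, with the backend chosen by
configuration so a gate job can use an insecure fixed key while production points
at Barbican; the class and option names below are invented for this sketch and are
not the actual Nova/Cinder key manager classes:

    import abc

    class KeyManager(abc.ABC):
        @abc.abstractmethod
        def get_key(self, context, key_id):
            """Return the key material identified by key_id."""

    class FixedKeyManager(KeyManager):
        """Insecure stand-alone backend: a single static key from config."""
        def __init__(self, fixed_key):
            self._key = fixed_key

        def get_key(self, context, key_id):
            return self._key

    class BarbicanKeyManager(KeyManager):
        """Production backend: fetch the secret from Barbican."""
        def get_key(self, context, key_id):
            # e.g. via python-barbicanclient: secrets.get(key_id).payload
            raise NotImplementedError('requires a Barbican endpoint')

    def load_key_manager(conf):
        # conf is a plain dict here, standing in for service configuration
        backends = {'fixed': lambda: FixedKeyManager(conf['fixed_key']),
                    'barbican': BarbicanKeyManager}
        return backends[conf['backend']]()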

From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Friday, March 18, 2016 at 10:52 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] High Availability

OK. If using Keystone is not acceptable, I am going to propose a new approach:

· Store data in Magnum DB

· Encrypt data before writing it to DB

· Decrypt data after loading it from DB

· Have the encryption/decryption key stored in config file

· Use encryption/decryption algorithm provided by a library

The approach above is the exact approach used by Heat to protect hidden 
parameters [1]. Compared to the Barbican option, this approach is much lighter 
and simpler, and provides a basic level of data protection. This option is a 
good supplement to the Barbican option, which is heavy but provides an advanced 
level of protection. It will fit the use cases where users don’t want to 
install Barbican but want basic protection.

If you disagree, I would ask you to justify why this approach works for 
Heat but not for Magnum. Also, I wonder if Heat has a plan to set a hard 
dependency on Barbican just for protecting the hidden parameters.

If you don’t like code duplication between Magnum and Heat, I would suggest 
moving the implementation to an oslo library to make it DRY. Thoughts?

[1] 
https://specs.openstack.org/openstack/heat-specs/specs/juno/encrypt-hidden-parameters.html
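
As a sketch of what that lighter option could look like in practice, the write/read
path could wrap a symmetric cipher around the DB column, with the key coming from
the service configuration; the option name, helper names, and oslo.config plumbing
below are assumptions for illustration, not Heat's or Magnum's actual code:

    from cryptography.fernet import Fernet
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.StrOpt('certificate_encryption_key', secret=True,
                                   help='urlsafe-base64 32-byte key (e.g. from '
                                        'Fernet.generate_key()) used to encrypt '
                                        'bay certificates at rest.')])

    def encrypt_for_db(cert_pem):
        # called just before the row is written to the Magnum DB
        return Fernet(CONF.certificate_encryption_key.encode()).encrypt(cert_pem)

    def decrypt_from_db(stored_blob):
        # called right after the row is loaded from the Magnum DB
        return Fernet(CONF.certificate_encryption_key.encode()).decrypt(stored_blob)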

Best regards,
Hongbin

From: David Stanek [mailto:dsta...@dstanek.com]
Sent: March-18-16 4:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


On Fri, Mar 18, 2016 at 4:03 PM Douglas Mendizábal 
<douglas.mendiza...@rackspace.com<mailto:douglas.mendiza...@rackspace.com>> 
wrote:
[snip]
>
> Regarding the Keystone solution, I'd like to hear the Keystone team's 
> feedback on that.  It definitely sounds to me like you're trying to put a 
> square peg in a round hole.
>

I believe that using Keystone for this is a mistake. As mentioned in the 
blueprint, Keystone is not encrypting the data so magnum would be on the hook 
to do it. So that means that if security is a

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Dave McCowan (dmccowan)

The most basic requirement here for Magnum is that it needs a safe place to 
store credentials.  A safe place cannot be provided by just a library or even 
by just a daemon.  Secure storage is provided by either a hardware solution (an 
HSM) or a software solution (SoftHSM, DogTag, IPA, IdM).  A project should give 
a variety of secure storage options to the user.

On this, we have competing requirements.  Devs need a turnkey option for easy 
testing locally or in the gate.  Users kicking the tires want a realistic 
solution they can try out easily with DevStack.  Operators who already have secure 
storage deployed for their cloud want an option that plugs into their existing 
HSMs.

Any roll-your-own option is not going to meet all of these requirements.

A good example that does meet all of these requirements is the key manager 
implementation in Nova and Cinder. [1] [2]

Nova and Cinder work together to provide volume encryption, and like Magnum, 
have a need to store and share keys securely.  Using a plugin architecture and 
the Barbican API, they implement a variety of key storage options:
- Fixed key allows for insecure stand-alone operation, running only Nova and 
Cinder.
- Barbican with a static key allows for easy deployment that can be started 
within DevStack with a few lines of config.
- Barbican with a secure backend allows for production-grade secure storage of 
keys that has been tested on a variety of HSMs and software options.

Barbican's adoption is growing.  Nova, Cinder, Neutron LBaaS, Sahara, and 
Magnum all have implementations using Barbican.  Swift and DNSSec also have use 
cases.  There are both RPM and Debian packages available for Barbican.  There 
are (at least tech-preview) versions of Puppet modules, Ansible playbooks, and 
DevStack plugins to deploy Barbican.

In summary, I think using Barbican absorbs the complexity of doing secure 
storage correctly.  It gives operators production-grade secure storage options, 
while giving devs easier options.

--Dave McCowan

[1] https://github.com/openstack/nova/tree/master/nova/keymgr
[2] https://github.com/openstack/cinder/tree/master/cinder/keymgr

From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Friday, March 18, 2016 at 10:52 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] High Availability

OK. If using Keystone is not acceptable, I am going to propose a new approach:

· Store data in Magnum DB

· Encrypt data before writing it to DB

· Decrypt data after loading it from DB

· Have the encryption/decryption key stored in config file

· Use encryption/decryption algorithm provided by a library

The approach above is the exact approach used by Heat to protect hidden 
parameters [1]. Compared to the Barbican option, this approach is much lighter 
and simpler, and provides a basic level of data protection. This option is a 
good supplement to the Barbican option, which is heavy but provides an advanced 
level of protection. It will fit the use cases where users don't want to 
install Barbican but want basic protection.

If you disagree, I would ask you to justify why this approach works for 
Heat but not for Magnum. Also, I wonder if Heat has a plan to set a hard 
dependency on Barbican just for protecting the hidden parameters.

If you don't like code duplication between Magnum and Heat, I would suggest 
moving the implementation to an oslo library to make it DRY. Thoughts?

[1] 
https://specs.openstack.org/openstack/heat-specs/specs/juno/encrypt-hidden-parameters.html

Best regards,
Hongbin

From: David Stanek [mailto:dsta...@dstanek.com]
Sent: March-18-16 4:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


On Fri, Mar 18, 2016 at 4:03 PM Douglas Mendizábal 
<douglas.mendiza...@rackspace.com<mailto:douglas.mendiza...@rackspace.com>> 
wrote:
[snip]
>
> Regarding the Keystone solution, I'd like to hear the Keystone team's 
> feedback on that.  It definitely sounds to me like you're trying to put a 
> square peg in a round hole.
>

I believe that using Keystone for this is a mistake. As mentioned in the 
blueprint, Keystone is not encrypting the data, so Magnum would be on the hook 
to do it. That means that if security is a requirement, you'd have to 
duplicate more than just code: Magnum would start having a larger security 
burden. Since we have a system designed to securely store data, I think that's 
the best place for data that needs to be secure.

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Ricardo Rocha
 only. In 
>>>> other words, this option is not for production. As a result, Barbican 
>>>> becomes the only option for production which is the root of the problem. 
>>>> It basically forces everyone to install Barbican in order to use Magnum.
>>>>
>>>> [1] https://review.openstack.org/#/c/212395/
>>>>
>>>>> It's probably a bad idea to replicate them.
>>>>> That's what Barbican is for. --adrian_otto
>>>> Frankly, I am surprised that you disagreed here. Back in July 2015, we all 
>>>> agreed to have two phases of implementation, and the statement was made by 
>>>> you [2].
>>>>
>>>> 
>>>> #agreed Magnum will use Barbican for an initial implementation for 
>>>> certificate generation and secure storage/retrieval.  We will commit to a 
>>>> second phase of development to eliminating the hard requirement on 
>>>> Barbican with an alternate implementation that implements the functional 
>>>> equivalent implemented in Magnum, which may depend on libraries, but not 
>>>> Barbican.
>>>> 
>>>>
>>>> [2] 
>>>> http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html
>>>
>>> The context there is important. Barbican was considered for two purposes: 
>>> (1) CA signing capability, and (2) certificate storage. My willingness to 
>>> implement an alternative was based on our need to get a certificate 
>>> generation and signing solution that actually worked, as Barbican did not 
>>> work for that at the time. I have always viewed Barbican as a suitable 
>>> solution for certificate storage, as that was what it was first designed 
>>> for. Since then, we have implemented certificate generation and signing 
>>> logic within a library that does not depend on Barbican, and we can use 
>>> that safely in production use cases. What we don’t have built in is what 
>>> Barbican is best at, secure storage for our certificates that will allow 
>>> multi-conductor operation.
>>>
>>> I am opposed to the idea that Magnum should re-implement Barbican for 
>>> certificate storage just because operators are reluctant to adopt it. If we 
>>> need to ship a Barbican instance along with each Magnum control plane, so 
>>> be it, but I don’t see the value in re-inventing the wheel. I promised the 
>>> OpenStack community that we were out to integrate with and enhance 
>>> OpenStack not to replace it.
>>>
>>> Now, with all that said, I do recognize that not all clouds are motivated 
>>> to use all available security best practices. They may be operating in 
>>> environments that they believe are already secure (because of a secure 
>>> perimeter), and that it’s okay to run fundamentally insecure software 
>>> within those environments. As misguided as this viewpoint may be, it’s 
>>> common. My belief is that it’s best to offer the best practice by default, 
>>> and only allow insecure operation when someone deliberately turns off 
>>> fundamental security features.
>>>
>>> With all this said, I also care about Magnum adoption as much as all of us, 
>>> so I’d like us to think creatively about how to strike the right balance 
>>> between re-implementing existing technology, and making that technology 
>>> easily accessible.
>>>
>>> Thanks,
>>>
>>> Adrian
>>>
>>>>
>>>> Best regards,
>>>> Hongbin
>>>>
>>>> -Original Message-
>>>> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
>>>> Sent: March-17-16 4:32 PM
>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>> Subject: Re: [openstack-dev] [magnum] High Availability
>>>>
>>>> I have trouble understanding that blueprint. I will put some remarks on 
>>>> the whiteboard. Duplicating Barbican sounds like a mistake to me.
>>>>
>>>> --
>>>> Adrian
>>>>
>>>>> On Mar 17, 2016, at 12:01 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
>>>>>
>>>>> The problem of missing Barbican alternative implementation has been 
>>>>> raised several times by different people. IMO, this is a very serious 
>>>>> issue that will hurt Magnum adoption. I created a blueprint

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Adrian Otto
use cases. 
What we don’t have built in is what Barbican is best at, secure storage for our 
certificates that will allow multi-conductor operation.

I am opposed to the idea that Magnum should re-implement Barbican for 
certificate storage just because operators are reluctant to adopt it. If we 
need to ship a Barbican instance along with each Magnum control plane, so be 
it, but I don’t see the value in re-inventing the wheel. I promised the 
OpenStack community that we were out to integrate with and enhance OpenStack 
not to replace it.

Now, with all that said, I do recognize that not all clouds are motivated to 
use all available security best practices. They may be operating in 
environments that they believe are already secure (because of a secure 
perimeter), and that it’s okay to run fundamentally insecure software within 
those environments. As misguided as this viewpoint may be, it’s common. My 
belief is that it’s best to offer the best practice by default, and only allow 
insecure operation when someone deliberately turns off fundamental security 
features.

With all this said, I also care about Magnum adoption as much as all of us, so 
I’d like us to think creatively about how to strike the right balance between 
re-implementing existing technology, and making that technology easily 
accessible.

Thanks,

Adrian


Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-17-16 4:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability

I have trouble understanding that blueprint. I will put some remarks on the 
whiteboard. Duplicating Barbican sounds like a mistake to me.

--
Adrian

On Mar 17, 2016, at 12:01 PM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:

The problem of a missing Barbican alternative implementation has been raised 
several times by different people. IMO, this is a very serious issue that will 
hurt Magnum adoption. I created a blueprint for that [1] and set the PTL as 
approver. It will be picked up by a contributor once it is approved.

[1]
https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-sto
re

Best regards,
Hongbin

-Original Message-
From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
Sent: March-17-16 2:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability

Hi.

We're on the way, the API is using haproxy load balancing in the same way all 
openstack services do here - this part seems to work fine.

For the conductor we're stopped due to bay certificates - we don't currently 
have barbican so local was the only option. To get them accessible on all nodes 
we're considering two options:
- store bay certs in a shared filesystem, meaning a new set of
credentials in the boxes (and a process to renew fs tokens)
- deploy barbican (some bits of puppet missing we're sorting out)

More news next week.

Cheers,
Ricardo

On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans) <daneh...@cisco.com> 
wrote:
All,

Does anyone have experience deploying Magnum in a highly-available fashion?
If so, I'm interested in learning from your experience. My biggest
unknown is the Conductor service. Any insight you can provide is
greatly appreciated.

Regards,
Daneyon Hansen

_
_  OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
 OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questi

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Ian Cordasco
 

-Original Message-
From: Hongbin Lu <hongbin...@huawei.com>
Reply: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Date: March 17, 2016 at 20:48:59
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject:  Re: [openstack-dev] [magnum] High Availability

> Thanks Adrian,
>  
> I think the Keystone approach will work. For others, please speak up if it 
> doesn’t work  
> for you.

So I think we need to clear out some assumptions before declaring that it will 
work.

First, we have the assumption that people not wanting to deploy Magnum with 
Barbican are using an older OpenStack version on the whole. Let's assume that's 
true. You're now choosing to depend on a Keystone v3 feature that should have 
been supported in Grizzly. This assumes a few things:

- Operators already had v3 turned on in Grizzly (which is *highly* unlikely)
- The API feature didn't have show-stopping bugs back then

Will Magnum now start rigorously testing the integration between this feature 
and Magnum at the gate, dating all the way back to Grizzly, so these (supposed) 
operators (none of whom have stepped forward in this discussion and apparently do 
not include Ricardo) can be certain their data won't be lost?

Further, how will this affect any future acceptance into the 
vulnerability-managed tag? Magnum would need a full security audit, and I'm 
certain this particular feature will set off several red flags. And given that 
few contributors to Magnum seem to have expertise with this kind of work, I 
have little confidence in anyone relying on this in production. It will likely 
be far less secure or trustworthy than deploying Barbican.

I'd also like to challenge the idea of doing something unless someone says it 
doesn't work for them. That shouldn't be the barrier for acceptance in an OpenStack 
project and especially not in Magnum. You're introducing several points of 
failure for security. You're potentially harming the future of Magnum (by 
excluding it from the VMT until this code is fixed/removed). You're solving for 
a demographic that doesn't seem to be represented here.

This feature needs far more justification in the way of *real* user stories for 
Magnum, from operators who cannot or will not deploy Barbican.

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Tim Bell

From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Saturday 19 March 2016 at 04:52
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] High Availability

...
If you disagree, I would ask you to justify why this approach works for 
Heat but not for Magnum. Also, I wonder if Heat has a plan to set a hard 
dependency on Barbican just for protecting the hidden parameters.


There is a risk that we use decisions made by other projects to justify how 
Magnum is implemented. Heat was created 3 years ago according to 
https://www.openstack.org/software/project-navigator/ and Barbican only 2 years 
ago, so Barbican may not have been an option at the time (or only a high-risk one).

Barbican has demonstrated that the project has corporate diversity and good 
stability 
(https://www.openstack.org/software/releases/liberty/components/barbican). 
There are some areas that could be improved (packaging and puppet modules often 
need some more investment).

I think it is worth a go to try it out and have concrete areas to improve if 
there are problems.

Tim

If you don’t like code duplication between Magnum and Heat, I would suggest 
moving the implementation to an oslo library to make it DRY. Thoughts?

[1] 
https://specs.openstack.org/openstack/heat-specs/specs/juno/encrypt-hidden-parameters.html

Best regards,
Hongbin

From: David Stanek [mailto:dsta...@dstanek.com]
Sent: March-18-16 4:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


On Fri, Mar 18, 2016 at 4:03 PM Douglas Mendizábal 
<douglas.mendiza...@rackspace.com<mailto:douglas.mendiza...@rackspace.com>> 
wrote:
[snip]
>
> Regarding the Keystone solution, I'd like to hear the Keystone team's 
> feedback on that.  It definitely sounds to me like you're trying to put a 
> square peg in a round hole.
>

I believe that using Keystone for this is a mistake. As mentioned in the 
blueprint, Keystone is not encrypting the data, so Magnum would be on the hook 
to do it. That means that if security is a requirement, you'd have to 
duplicate more than just code: Magnum would start having a larger security 
burden. Since we have a system designed to securely store data, I think that's 
the best place for data that needs to be secure.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Adrian Otto
hat does not depend on Barbican, and we can use that safely 
> in production use cases. What we don’t have built in is what Barbican is best 
> at, secure storage for our certificates that will allow multi-conductor 
> operation.
> 
> I am opposed to the idea that Magnum should re-implement Barbican for 
> certificate storage just because operators are reluctant to adopt it. If we 
> need to ship a Barbican instance along with each Magnum control plane, so be 
> it, but I don’t see the value in re-inventing the wheel. I promised the 
> OpenStack community that we were out to integrate with and enhance OpenStack 
> not to replace it.
> 
> Now, with all that said, I do recognize that not all clouds are motivated to 
> use all available security best practices. They may be operating in 
> environments that they believe are already secure (because of a secure 
> perimeter), and that it’s okay to run fundamentally insecure software within 
> those environments. As misguided as this viewpoint may be, it’s common. My 
> belief is that it’s best to offer the best practice by default, and only 
> allow insecure operation when someone deliberately turns off fundamental 
> security features.
> 
> With all this said, I also care about Magnum adoption as much as all of us, 
> so I’d like us to think creatively about how to strike the right balance 
> between re-implementing existing technology, and making that technology 
> easily accessible.
> 
> Thanks,
> 
> Adrian
> 
>> 
>> Best regards,
>> Hongbin
>> 
>> -Original Message-
>> From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
>> Sent: March-17-16 4:32 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum] High Availability
>> 
>> I have trouble understanding that blueprint. I will put some remarks on the 
>> whiteboard. Duplicating Barbican sounds like a mistake to me.
>> 
>> --
>> Adrian
>> 
>>> On Mar 17, 2016, at 12:01 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
>>> 
>>> The problem of missing Barbican alternative implementation has been raised 
>>> several times by different people. IMO, this is a very serious issue that 
>>> will hurt Magnum adoption. I created a blueprint for that [1] and set the 
>>> PTL as approver. It will be picked up by a contributor once it is approved.
>>> 
>>> [1] 
>>> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-sto
>>> re
>>> 
>>> Best regards,
>>> Hongbin
>>> 
>>> -Original Message-
>>> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
>>> Sent: March-17-16 2:39 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [magnum] High Availability
>>> 
>>> Hi.
>>> 
>>> We're on the way, the API is using haproxy load balancing in the same way 
>>> all openstack services do here - this part seems to work fine.
>>> 
>>> For the conductor we're stopped due to bay certificates - we don't 
>>> currently have barbican so local was the only option. To get them 
>>> accessible on all nodes we're considering two options:
>>> - store bay certs in a shared filesystem, meaning a new set of 
>>> credentials in the boxes (and a process to renew fs tokens)
>>> - deploy barbican (some bits of puppet missing we're sorting out)
>>> 
>>> More news next week.
>>> 
>>> Cheers,
>>> Ricardo
>>> 
>>>> On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans) 
>>>> <daneh...@cisco.com> wrote:
>>>> All,
>>>> 
>>>> Does anyone have experience deploying Magnum in a highly-available fashion?
>>>> If so, I'm interested in learning from your experience. My biggest 
>>>> unknown is the Conductor service. Any insight you can provide is 
>>>> greatly appreciated.
>>>> 
>>>> Regards,
>>>> Daneyon Hansen
>>>> 
>>>> _
>>>> _  OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: 
>>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> __
>>>  OpenStack Development Mailing List (not for usage questions)
>>> Unsubscri

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Hongbin Lu
Adrian,

I think we need a broader set of inputs in this matter, so I moved the 
discussion from the whiteboard back to here. Please check my replies inline.

> I would like to get a clear problem statement written for this.
> As I see it, the problem is that there is no safe place to put certificates 
> in clouds that do not run Barbican.
> It seems the solution is to make it easy to add Barbican such that it's 
> included in the setup for Magnum.
No, the solution is to explore a non-Barbican solution to store certificates 
securely.

> Magnum should not be in the business of credential storage when there is an 
> existing service focused on that need.
>
> Is there an issue with running Barbican on older clouds?
> Anyone can choose to use the builtin option with Magnum if hey don't have 
> Barbican.
> A known limitation of that approach is that certificates are not replicated.
I guess the *builtin* option you referred to is simply placing the certificates on 
the local file system. A few of us had concerns about this approach (in particular, 
Tom Cammann gave a -2 on the review [1]) because it cannot scale beyond a 
single conductor. Finally, we made a compromise to land this option and use it 
for testing/debugging only. In other words, this option is not for production. 
As a result, Barbican becomes the only option for production, which is the root 
of the problem. It basically forces everyone to install Barbican in order to 
use Magnum.

[1] https://review.openstack.org/#/c/212395/ 

> It's probably a bad idea to replicate them.
> That's what Barbican is for. --adrian_otto
Frankly, I am surprised that you disagreed here. Back in July 2015, we all 
agreed to have two phases of implementation, and the statement was made by you 
[2].


#agreed Magnum will use Barbican for an initial implementation for certificate 
generation and secure storage/retrieval.  We will commit to a second phase of 
development to eliminating the hard requirement on Barbican with an alternate 
implementation that implements the functional equivalent implemented in Magnum, 
which may depend on libraries, but not Barbican.


[2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
Sent: March-17-16 4:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability

I have trouble understanding that blueprint. I will put some remarks on the 
whiteboard. Duplicating Barbican sounds like a mistake to me.

--
Adrian

> On Mar 17, 2016, at 12:01 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
> 
> The problem of missing Barbican alternative implementation has been raised 
> several times by different people. IMO, this is a very serious issue that 
> will hurt Magnum adoption. I created a blueprint for that [1] and set the PTL 
> as approver. It will be picked up by a contributor once it is approved.
> 
> [1] 
> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-sto
> re
> 
> Best regards,
> Hongbin
> 
> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> Sent: March-17-16 2:39 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
> Hi.
> 
> We're on the way, the API is using haproxy load balancing in the same way all 
> openstack services do here - this part seems to work fine.
> 
> For the conductor we're stopped due to bay certificates - we don't currently 
> have barbican so local was the only option. To get them accessible on all 
> nodes we're considering two options:
> - store bay certs in a shared filesystem, meaning a new set of 
> credentials in the boxes (and a process to renew fs tokens)
> - deploy barbican (some bits of puppet missing we're sorting out)
> 
> More news next week.
> 
> Cheers,
> Ricardo
> 
>> On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans) 
>> <daneh...@cisco.com> wrote:
>> All,
>> 
>> Does anyone have experience deploying Magnum in a highly-available fashion?
>> If so, I'm interested in learning from your experience. My biggest 
>> unknown is the Conductor service. Any insight you can provide is 
>> greatly appreciated.
>> 
>> Regards,
>> Daneyon Hansen
>> 
>> _
>> _  OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cg

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Adrian Otto
I have trouble understanding that blueprint. I will put some remarks on the 
whiteboard. Duplicating Barbican sounds like a mistake to me.

--
Adrian

> On Mar 17, 2016, at 12:01 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
> 
> The problem of missing Barbican alternative implementation has been raised 
> several times by different people. IMO, this is a very serious issue that 
> will hurt Magnum adoption. I created a blueprint for that [1] and set the PTL 
> as approver. It will be picked up by a contributor once it is approved.
> 
> [1] https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store 
> 
> Best regards,
> Hongbin
> 
> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com] 
> Sent: March-17-16 2:39 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
> Hi.
> 
> We're on the way, the API is using haproxy load balancing in the same way all 
> openstack services do here - this part seems to work fine.
> 
> For the conductor we're stopped due to bay certificates - we don't currently 
> have barbican so local was the only option. To get them accessible on all 
> nodes we're considering two options:
> - store bay certs in a shared filesystem, meaning a new set of credentials in 
> the boxes (and a process to renew fs tokens)
> - deploy barbican (some bits of puppet missing we're sorting out)
> 
> More news next week.
> 
> Cheers,
> Ricardo
> 
>> On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans) 
>> <daneh...@cisco.com> wrote:
>> All,
>> 
>> Does anyone have experience deploying Magnum in a highly-available fashion?
>> If so, I’m interested in learning from your experience. My biggest 
>> unknown is the Conductor service. Any insight you can provide is 
>> greatly appreciated.
>> 
>> Regards,
>> Daneyon Hansen
>> 


Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Hongbin Lu
The problem of missing Barbican alternative implementation has been raised 
several times by different people. IMO, this is a very serious issue that will 
hurt Magnum adoption. I created a blueprint for that [1] and set the PTL as 
approver. It will be picked up by a contributor once it is approved.

[1] https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store 

Best regards,
Hongbin

-Original Message-
From: Ricardo Rocha [mailto:rocha.po...@gmail.com] 
Sent: March-17-16 2:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability

Hi.

We're on the way; the API is using haproxy load balancing in the same way all 
OpenStack services do here - this part seems to work fine.

For the conductor we're stopped due to bay certificates - we don't currently 
have barbican so local was the only option. To get them accessible on all nodes 
we're considering two options:
- store bay certs in a shared filesystem, meaning a new set of credentials in 
the boxes (and a process to renew fs tokens)
- deploy barbican (some bits of puppet missing we're sorting out)

More news next week.

Cheers,
Ricardo

On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans) <daneh...@cisco.com> 
wrote:
> All,
>
> Does anyone have experience deploying Magnum in a highly-available fashion?
> If so, I’m interested in learning from your experience. My biggest 
> unknown is the Conductor service. Any insight you can provide is 
> greatly appreciated.
>
> Regards,
> Daneyon Hansen
>


Re: [openstack-dev] [magnum] High Availability

2016-03-18 Thread Hongbin Lu
Thanks Adrian,

I think the Keystone approach will work. For others, please speak up if it 
doesn’t work for you.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-17-16 9:28 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability

Hongbin,

I tweaked the blueprint in accordance with this approach, and approved it for 
Newton:
https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store

I think this is something we can all agree on as a middle ground. If not, I’m 
open to revisiting the discussion.

Thanks,

Adrian

On Mar 17, 2016, at 6:13 PM, Adrian Otto 
<adrian.o...@rackspace.com> wrote:

Hongbin,

One alternative we could discuss as an option for operators that have a good 
reason not to use Barbican, is to use Keystone.

Keystone credentials store: 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#credentials-v3-credentials

The contents are stored in plain text in the Keystone DB, so we would want to 
generate an encryption key per bay, encrypt the certificate and store it in 
keystone. We would then use the same key to decrypt it upon reading the key 
back. This might be an acceptable middle ground for clouds that will not or can 
not run Barbican. This should work for any OpenStack cloud since Grizzly. The 
total amount of code in Magnum would be small, as the API already exists. We 
would need a library function to encrypt and decrypt the data, and ideally a 
way to select different encryption algorithms in case one is judged weak at 
some point in the future, justifying the use of an alternate.
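
To make the shape of this concrete, here is a minimal sketch (illustrative
only; the endpoint, token handling, credential type and the choice of a Fernet
cipher are assumptions on top of the description above, not part of any
blueprint):

    # Hypothetical sketch: encrypt a bay certificate with a per-bay key and
    # store the ciphertext as a Keystone v3 credential blob. Names are made up.
    import base64
    import json

    import requests
    from cryptography.fernet import Fernet

    KEYSTONE = 'http://keystone.example.com:5000/v3'   # hypothetical endpoint
    HEADERS = {'X-Auth-Token': 'A_VALID_TOKEN',        # placeholder token
               'Content-Type': 'application/json'}

    def store_bay_cert(user_id, project_id, bay_uuid, cert_pem):
        """Encrypt cert_pem (bytes) with a fresh per-bay key and store it."""
        key = Fernet.generate_key()                    # per-bay encryption key
        ciphertext = Fernet(key).encrypt(cert_pem)
        body = {'credential': {
            'user_id': user_id,
            'project_id': project_id,
            'type': 'magnum-bay-cert',                 # hypothetical type string
            'blob': json.dumps({'bay': bay_uuid,
                                'data': base64.b64encode(ciphertext).decode()}),
        }}
        resp = requests.post(KEYSTONE + '/credentials',
                             headers=HEADERS, data=json.dumps(body))
        resp.raise_for_status()
        # The per-bay key still has to live somewhere the conductor can reach;
        # that is exactly the part Barbican normally takes care of.
        return key, resp.json()['credential']['id']

    def load_bay_cert(credential_id, key):
        """Fetch the credential back and decrypt it with the per-bay key."""
        resp = requests.get(KEYSTONE + '/credentials/' + credential_id,
                            headers=HEADERS)
        resp.raise_for_status()
        blob = json.loads(resp.json()['credential']['blob'])
        return Fernet(key).decrypt(base64.b64decode(blob['data']))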

Adrian


On Mar 17, 2016, at 4:55 PM, Adrian Otto 
<adrian.o...@rackspace.com> wrote:

Hongbin,


On Mar 17, 2016, at 2:25 PM, Hongbin Lu 
<hongbin...@huawei.com> wrote:

Adrian,

I think we need a broader set of inputs in this matter, so I moved the 
discussion from the whiteboard back to here. Please check my replies inline.


I would like to get a clear problem statement written for this.
As I see it, the problem is that there is no safe place to put certificates in 
clouds that do not run Barbican.
It seems the solution is to make it easy to add Barbican such that it's 
included in the setup for Magnum.
No, the solution is to explore a non-Barbican solution to store certificates 
securely.

I am seeking more clarity about why a non-Barbican solution is desired. Why is 
there resistance to adopting both Magnum and Barbican together? I think the 
answer is that people think they can make Magnum work with really old clouds 
that were set up before Barbican was introduced. That expectation is simply not 
reasonable. If there were a way to easily add Barbican to older clouds, perhaps 
this reluctance would melt away.


Magnum should not be in the business of credential storage when there is an 
existing service focused on that need.

Is there an issue with running Barbican on older clouds?
Anyone can choose to use the builtin option with Magnum if they don't have 
Barbican.
A known limitation of that approach is that certificates are not replicated.
I guess the *builtin* option you referred to is simply placing the certificates on 
the local file system. A few of us had concerns about this approach (in particular, 
Tom Cammann gave a -2 on the review [1]) because it cannot scale beyond a 
single conductor. In the end, we made a compromise to land this option and use it 
for testing/debugging only. In other words, this option is not for production. 
As a result, Barbican becomes the only option for production, which is the root 
of the problem. It basically forces everyone to install Barbican in order to 
use Magnum.

[1] https://review.openstack.org/#/c/212395/


It's probably a bad idea to replicate them.
That's what Barbican is for. --adrian_otto
Frankly, I am surprised that you disagreed here. Back in July 2015, we all 
agreed to have two phases of implementation, and the statement was made by you 
[2].


#agreed Magnum will use Barbican for an initial implementation for certificate 
generation and secure storage/retrieval.  We will commit to a second phase of 
development to eliminating the hard requirement on Barbican with an alternate 
implementation that implements the functional equivalent implemented in Magnum, 
which may depend on libraries, but not Barbican.


[2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html

The context there is important. Barbican was considered for two purposes: (1) 
CA signing capability, and (2) certificate storage. My willingness to implement 
an alternative was based on our need to get a certificate generation and 
signing solution that actua

Re: [openstack-dev] [magnum] High Availability

2016-03-18 Thread Adrian Otto
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability

I have trouble understanding that blueprint. I will put some remarks on the 
whiteboard. Duplicating Barbican sounds like a mistake to me.

--
Adrian

On Mar 17, 2016, at 12:01 PM, Hongbin Lu 
<hongbin...@huawei.com> wrote:

The problem of missing Barbican alternative implementation has been raised 
several times by different people. IMO, this is a very serious issue that will 
hurt Magnum adoption. I created a blueprint for that [1] and set the PTL as 
approver. It will be picked up by a contributor once it is approved.

[1]
https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store

Best regards,
Hongbin

-Original Message-
From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
Sent: March-17-16 2:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability

Hi.

We're on the way; the API is using haproxy load balancing in the same way all 
OpenStack services do here - this part seems to work fine.

For the conductor we're stopped due to bay certificates - we don't currently 
have barbican so local was the only option. To get them accessible on all nodes 
we're considering two options:
- store bay certs in a shared filesystem, meaning a new set of
credentials in the boxes (and a process to renew fs tokens)
- deploy barbican (some bits of puppet missing we're sorting out)

More news next week.

Cheers,
Ricardo

On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans) <daneh...@cisco.com> 
wrote:
All,

Does anyone have experience deploying Magnum in a highly-available fashion?
If so, I'm interested in learning from your experience. My biggest
unknown is the Conductor service. Any insight you can provide is
greatly appreciated.

Regards,
Daneyon Hansen



Re: [openstack-dev] [magnum] High Availability

2016-03-18 Thread Hongbin Lu
Douglas,

I am not opposed to adopting Barbican in Magnum (in fact, we already adopted 
Barbican). What I am opposed to is a Barbican lock-in, which already has a 
negative impact on Magnum adoption based on our feedback. I also want to see an 
increase in Barbican adoption in the future, with all our users having Barbican 
installed in their clouds. If that happens, I have no problem having a hard 
dependency on Barbican.

Best regards,
Hongbin

-Original Message-
From: Douglas Mendizábal [mailto:douglas.mendiza...@rackspace.com] 
Sent: March-18-16 9:45 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] High Availability

Hongbin,

I think Adrian makes some excellent points regarding the adoption of Barbican.  
As the PTL for Barbican, it's frustrating to me to constantly hear other 
projects say that securing their sensitive data is a requirement, but then turn 
around and say that deploying Barbican is a problem.

I guess I'm having a hard time understanding the operator persona that is 
willing to deploy new services with security features but unwilling to also 
deploy the service that is meant to secure sensitive data across all of 
OpenStack.

I understand one barrier to entry for Barbican is the high cost of Hardware 
Security Modules, which we recommend as the best option for the Storage and 
Crypto backends for Barbican.  But there are also other options for securing 
Barbican using open source software like DogTag or SoftHSM.

I also expect Barbican adoption to increase in the future, and I was hoping 
that Magnum would help drive that adoption.  There are also other projects that 
are actively developing security features, like Swift encryption and DNSSEC 
support in Designate.  Eventually these features will also require Barbican, so 
I agree with Adrian that we as a community should be encouraging deployers to 
adopt the best security practices.

Regarding the Keystone solution, I'd like to hear the Keystone team's feedback 
on that.  It definitely sounds to me like you're trying to put a square peg in 
a round hole.

- Doug

On 3/17/16 8:45 PM, Hongbin Lu wrote:
> Thanks Adrian,
> 
>  
> 
> I think the Keystone approach will work. For others, please speak up 
> if it doesn't work for you.
> 
>  
> 
> Best regards,
> 
> Hongbin
> 
>  
> 
> *From:*Adrian Otto [mailto:adrian.o...@rackspace.com]
> *Sent:* March-17-16 9:28 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] High Availability
> 
>  
> 
> Hongbin,
> 
>  
> 
> I tweaked the blueprint in accordance with this approach, and approved 
> it for Newton:
> 
> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
> 
>  
> 
> I think this is something we can all agree on as a middle ground. If 
> not, I'm open to revisiting the discussion.
> 
>  
> 
> Thanks,
> 
>  
> 
> Adrian
> 
>  
> 
> On Mar 17, 2016, at 6:13 PM, Adrian Otto <adrian.o...@rackspace.com> wrote:
> 
>  
> 
> Hongbin,
> 
> One alternative we could discuss as an option for operators that
> have a good reason not to use Barbican, is to use Keystone.
> 
> Keystone credentials store:
> 
> http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#credentials-v3-credentials
> 
> The contents are stored in plain text in the Keystone DB, so we
> would want to generate an encryption key per bay, encrypt the
> certificate and store it in keystone. We would then use the same key
> to decrypt it upon reading the key back. This might be an acceptable
> middle ground for clouds that will not or can not run Barbican. This
> should work for any OpenStack cloud since Grizzly. The total amount
> of code in Magnum would be small, as the API already exists. We
> would need a library function to encrypt and decrypt the data, and
> ideally a way to select different encryption algorithms in case one
> is judged weak at some point in the future, justifying the use of an
> alternate.
> 
> Adrian
> 
> 
> On Mar 17, 2016, at 4:55 PM, Adrian Otto <adrian.o...@rackspace.com> wrote:
> 
> Hongbin,
> 
> 
> On Mar 17, 2016, at 2:25 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
> 
> Adrian,
> 
> I think we need a broader set of inputs in this matter, so I moved
> the discussion from whiteboard back to here. Please check my replies
> inline.
> 
> 
> I would like to get a clear problem statement written for this.
> As I see it, the

Re: [openstack-dev] [magnum] High Availability

2016-03-18 Thread Hongbin Lu
OK. If using Keystone is not acceptable, I am going to propose a new approach:

- Store data in the Magnum DB
- Encrypt data before writing it to the DB
- Decrypt data after loading it from the DB
- Have the encryption/decryption key stored in the config file
- Use an encryption/decryption algorithm provided by a library

The approach above is exactly the approach used by Heat to protect hidden 
parameters [1]. Compared to the Barbican option, this approach is much lighter 
and simpler, and provides a basic level of data protection. This option is a 
good supplement to the Barbican option, which is heavier but provides an 
advanced level of protection. It fits the use cases where users don’t want to 
install Barbican but still want basic protection.
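
As a rough illustration of how small this could be (assuming Fernet as the
library-provided algorithm and oslo.config for the key; the option name and
group are made up here, and Heat's real implementation differs in its details):

    # Hypothetical sketch of the proposal: a symmetric key from the service
    # config file, encrypt before writing to the DB, decrypt after reading back.
    from cryptography.fernet import Fernet
    from oslo_config import cfg

    opts = [cfg.StrOpt('cert_encryption_key',      # illustrative option name
                       secret=True,
                       help='Fernet key used to encrypt certs stored in the DB; '
                            'generate one with Fernet.generate_key().')]
    cfg.CONF.register_opts(opts, group='certificates')

    def encrypt(plaintext):
        """Encrypt bytes before they are written to the database."""
        key = cfg.CONF.certificates.cert_encryption_key.encode()
        return Fernet(key).encrypt(plaintext)

    def decrypt(ciphertext):
        """Decrypt bytes after they are loaded from the database."""
        key = cfg.CONF.certificates.cert_encryption_key.encode()
        return Fernet(key).decrypt(ciphertext)

The obvious trade-off is that anyone who can read the config file can decrypt
everything in the DB, and rotating that key means re-encrypting all stored
certificates, which is where Barbican still has the advantage.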

If you disagree, I would ask you to justify why this approach works for 
Heat but not for Magnum. I also wonder whether Heat plans to set a hard 
dependency on Barbican just for protecting the hidden parameters.

If you don’t like code duplication between Magnum and Heat, I would suggest 
moving the implementation to an oslo library to make it DRY. Thoughts?

[1] 
https://specs.openstack.org/openstack/heat-specs/specs/juno/encrypt-hidden-parameters.html

Best regards,
Hongbin

From: David Stanek [mailto:dsta...@dstanek.com]
Sent: March-18-16 4:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


On Fri, Mar 18, 2016 at 4:03 PM Douglas Mendizábal 
<douglas.mendiza...@rackspace.com> 
wrote:
[snip]
>
> Regarding the Keystone solution, I'd like to hear the Keystone team's 
> feedback on that.  It definitely sounds to me like you're trying to put a 
> square peg in a round hole.
>

I believe that using Keystone for this is a mistake. As mentioned in the 
blueprint, Keystone does not encrypt the data, so Magnum would be on the hook 
to do it. That means that if security is a requirement, you'd have to 
duplicate more than just code; Magnum would take on a larger security 
burden. Since we have a system designed to securely store data, I think that's 
the best place for data that needs to be secure.


Re: [openstack-dev] [magnum] High Availability

2016-03-18 Thread Daneyon Hansen (danehans)


> On Mar 17, 2016, at 11:41 AM, Ricardo Rocha  wrote:
> 
> Hi.
> 
> We're on the way; the API is using haproxy load balancing in the same
> way all OpenStack services do here - this part seems to work fine.

I expected the API to work. Thanks for the confirmation. 
> 
> For the conductor we're stopped due to bay certificates - we don't
> currently have barbican so local was the only option. To get them
> accessible on all nodes we're considering two options:
> - store bay certs in a shared filesystem, meaning a new set of
> credentials in the boxes (and a process to renew fs tokens)
> - deploy barbican (some bits of puppet missing we're sorting out)

How funny. I had this concern and proposed a similar solution to Hongbin over 
IRC yesterday. I suggested we discuss this issue at Austin, as Barbican is 
becoming a barrier to Magnum adoption. Please keep this thread updated as you 
progress with your deployment, and I'll do the same. 
> 
> More news next week.
> 
> Cheers,
> Ricardo
> 
> On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans)
>  wrote:
>> All,
>> 
>> Does anyone have experience deploying Magnum in a highly-available fashion?
>> If so, I’m interested in learning from your experience. My biggest unknown
>> is the Conductor service. Any insight you can provide is greatly
>> appreciated.
>> 
>> Regards,
>> Daneyon Hansen
>> 