Pinging again. I would like to learn how others using Kubernetes are 
separating different tenants with dynamic provisioning. Specifically, for 
those using RBD provisioning, are you allocating a StorageClass (and hence 
a Ceph pool) per customer? What advantages or disadvantages do you see 
with this approach?
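
To make the question concrete, this is roughly what I mean by a class (and 
pool) per customer -- every name, monitor address and secret below is a 
placeholder:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: tenant-a-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.0.0.1:6789                # placeholder Ceph monitor
  adminId: admin
  adminSecretName: ceph-admin-secret     # provisioner admin secret
  adminSecretNamespace: kube-system
  pool: tenant-a                         # dedicated Ceph pool for this tenant
  userId: tenant-a
  userSecretName: ceph-tenant-a-secret   # per-tenant cephx secret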

What isolation mechanisms do you use to make sure a PV belonging to one 
customer is never seen or used by another customer?
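
For example (names are illustrative), tenant A's claim would look like the 
sketch below, and as far as I can tell nothing in core Kubernetes stops a 
claim in tenant B's namespace from naming the same storageClassName:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data
  namespace: tenant-a
spec:
  storageClassName: tenant-a-rbd         # tenant A's class from the sketch above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

That cross-namespace reference is exactly the hole I am trying to 
understand how people close.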

-Mayank

On Tuesday, June 6, 2017 at 11:32:55 PM UTC-7, Mayank wrote:
>
> Adding the Kubernetes-Users group to gather input from the community.
>  
>
> On Tuesday, June 6, 2017 at 10:55:25 PM UTC-7, krma...@gmail.com wrote:
>>
>> Thanks all. The suggestion to use a reclaimPolicy of Delete would not 
>> work. As Jeff points out, in the case of stateful apps we would like not 
>> to delete the volumes immediately but to keep them around for some time 
>> and garbage-collect them later.
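>>
>> Roughly, we want each dynamically provisioned PV to end up looking like 
>> the sketch below (all names and addresses are illustrative), with the 
>> reclaim policy flipped to Retain so the data outlives the claim and can 
>> be garbage collected on our own schedule:
>>
>> kind: PersistentVolume
>> apiVersion: v1
>> metadata:
>>   name: pv-tenant-a-0001                  # illustrative name
>> spec:
>>   capacity:
>>     storage: 10Gi
>>   accessModes:
>>     - ReadWriteOnce
>>   persistentVolumeReclaimPolicy: Retain   # keep the data after the claim is deleted
>>   storageClassName: tenant-a-rbd
>>   rbd:
>>     monitors:
>>       - 10.0.0.1:6789                     # placeholder Ceph monitor
>>     pool: tenant-a
>>     image: pv-tenant-a-0001
>>     user: tenant-a
>>     secretRef:
>>       name: ceph-tenant-a-secret
>>     fsType: ext4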
>>
>> @David Could you elaborate on the inefficient utilization you mention 
>> here? "If your set of tenants is very static, i guess you could have one 
>> StorageClass per tenant and only use "recycle" reclaim policy (which 
>> seems to be what you're advocating). But this seems pretty inefficient 
>> from a utilization standpoint, as you'd end up accumulating the max 
>> number of PVs used by each tenant."
>>
>> @David We are interested in RBD to start with. We will also be doing EBS 
>> volumes.
>>
>> @Mike Yes, encryption looks like a reasonable solution as long as we can 
>> separate the encryption keys per tenant and only the tenant has access to 
>> its own keys. EBS volumes are the only ones supporting encryption, so for 
>> RBD we are out of luck. Is there a general pattern for doing encryption 
>> with external provisioning? What about incorporating it into the in-tree 
>> plugins?
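>>
>> For EBS, what I had in mind is something like the class below (assuming 
>> I am reading the aws-ebs provisioner parameters right; the key ARN is a 
>> placeholder for a per-tenant KMS key that only that tenant can use):
>>
>> kind: StorageClass
>> apiVersion: storage.k8s.io/v1
>> metadata:
>>   name: tenant-a-ebs-encrypted
>> provisioner: kubernetes.io/aws-ebs
>> parameters:
>>   type: gp2
>>   encrypted: "true"
>>   # placeholder ARN for a per-tenant KMS customer master key
>>   kmsKeyId: arn:aws:kms:us-east-1:111122223333:key/00000000-0000-0000-0000-000000000000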
>>
>> @Clayton Doesn't OpenShift have use cases for keeping the volumes around 
>> after the pods are gone? Does OpenShift do any kind of multi-tenancy on 
>> top?
>>
>> @Tim Thanks. Not advocating that PVs should or should not be namespaced; 
>> just trying to understand the rationale. One thought was that if PVs were 
>> dynamically provisioned in the customer's namespace, that might further 
>> limit access to them.
>>
>> @Jeff I think your understanding of the problem is correct. But it's 
>> possible I am solving the wrong problem, and that's why I am reaching 
>> out for help.
>>
>>
>> Overall I want a multi-tenant model where:
>> -- it is not possible for one tenant to accidentally mount a volume 
>> created by another tenant
>> -- a compromise of one tenant limits the attack surface and does not 
>> give access to other tenants' volumes
>>
>> In the absence of encryption, a Kubernetes-native way of ACL'ing the PVs 
>> that doesn't rely on the underlying storage implementation would be 
>> great.
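>>
>> The closest Kubernetes-native control I can see today is RBAC over the 
>> claims rather than over the PVs themselves -- roughly the sketch below 
>> (names are placeholders), where a tenant only ever gets namespaced 
>> access to PVCs and is never granted the cluster-scoped persistentvolumes 
>> resource:
>>
>> kind: Role
>> apiVersion: rbac.authorization.k8s.io/v1beta1
>> metadata:
>>   name: pvc-user
>>   namespace: tenant-a
>> rules:
>>   - apiGroups: [""]
>>     resources: ["persistentvolumeclaims"]
>>     verbs: ["get", "list", "watch", "create", "delete"]
>> ---
>> kind: RoleBinding
>> apiVersion: rbac.authorization.k8s.io/v1beta1
>> metadata:
>>   name: tenant-a-pvc-users
>>   namespace: tenant-a
>> subjects:
>>   - kind: Group
>>     name: tenant-a-devs                   # placeholder tenant group
>>     apiGroup: rbac.authorization.k8s.io
>> roleRef:
>>   kind: Role
>>   name: pvc-user
>>   apiGroup: rbac.authorization.k8s.io
>>
>> But that only controls who can create and read claims; it does not stop 
>> a claim from referencing another tenant's StorageClass, so I don't think 
>> it fully closes the gap.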
>>
>>
>> On Monday, June 5, 2017 at 9:17:49 AM UTC-7, Jeff Vance wrote:
>>>
>>> "... guarantee that a volume allocated to one customer can never be 
>>> accidentally allocated/mounted/accessed by another customer"
>>>
>>> I don't see "delete" as the answer here since this deletes the actual 
>>> data in the volume. What if the customer has legacy data or important data 
>>> that lives beyond the pod(s) accessing it? Perhaps an "exclusive" 
>>> accessMode might help? FSGroup IDs can control access and there's been talk 
>>> about adding ACLs to PVs. Or, maybe I've misunderstood the question?
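>>>
>>> For example (values are illustrative), each tenant's pods could run with 
>>> their own fsGroup so a mounted volume gets group ownership that only that 
>>> tenant's pods share:
>>>
>>> kind: Pod
>>> apiVersion: v1
>>> metadata:
>>>   name: tenant-a-app
>>>   namespace: tenant-a
>>> spec:
>>>   securityContext:
>>>     fsGroup: 2000                   # illustrative per-tenant group ID
>>>   containers:
>>>     - name: app
>>>       image: nginx                  # placeholder image
>>>       volumeMounts:
>>>         - name: data
>>>           mountPath: /data
>>>   volumes:
>>>     - name: data
>>>       persistentVolumeClaim:
>>>         claimName: data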
>>>
>>> jeff
>>>
>>> On Saturday, June 3, 2017 at 11:39:16 PM UTC-7, krma...@gmail.com wrote:
>>>>
>>>> My team is currently trying to enable stateful apps for our internal 
>>>> customers. One requirement that keeps coming up is how to isolate the PVs 
>>>> of one internal customer from those of another internal customer.
>>>>
>>>> I see the following isolation mechanisms:
>>>> - A PV, once bound to a PVC (inside namespace A), cannot be bound to 
>>>> another PVC (inside namespace B) unless it is first unbound, so the 
>>>> binding is exclusive (see the sketch below).
>>>> - When using StorageClasses, a PV of a certain class can only be bound 
>>>> to a PVC of the same class. So a PVC of class A can only be bound to a 
>>>> PV of class A, which keeps a PV allocated to one customer from 
>>>> accidentally being allocated to another customer.
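>>>>
>>>> To illustrate the first bullet (all names are made up): once bound, the 
>>>> PV records exactly one claim in its claimRef and will not bind to any 
>>>> other claim while that reference is set:
>>>>
>>>> kind: PersistentVolume
>>>> apiVersion: v1
>>>> metadata:
>>>>   name: pv-0001                         # illustrative
>>>> spec:
>>>>   capacity:
>>>>     storage: 10Gi
>>>>   accessModes:
>>>>     - ReadWriteOnce
>>>>   storageClassName: class-a
>>>>   claimRef:                             # set by the binder once bound
>>>>     namespace: customer-a
>>>>     name: data
>>>>   rbd:
>>>>     monitors:
>>>>       - 10.0.0.1:6789                   # placeholder monitor
>>>>     pool: customer-a
>>>>     image: pv-0001
>>>>     user: customer-a
>>>>     secretRef:
>>>>       name: ceph-customer-a-secret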
>>>>
>>>> While the above isolation is good, it's not enough (as I understand it). 
>>>> In a multi-tenant environment we want mechanisms that can guarantee that 
>>>> a volume allocated to one customer can never be accidentally allocated, 
>>>> mounted, or accessed by another customer.
>>>>
>>>> What is the Kubernetes recommendation on how to achieve this isolation?
>>>>
>>>> A few more questions:
>>>> - Why are PersistentVolumes not namespaced?
>>>> - Is one or more StorageClasses per customer a good multi-tenancy 
>>>> model? What other recommendations do we have?
>>>>
>>>>
>>>> Would love to hear the general thinking around this from both 
>>>> developers and the community.
>>>> -Mayank
>>>>
>>>

