Re: [Openstack-operators] Dealing with ITAR in OpenStack private clouds

2017-03-22 Thread Blair Bethwaite
Could just avoid Glance snapshots and indeed Nova ephemeral storage
altogether by exclusively booting from volume with your ITAR volume type or
AZ. I don't know what other ITAR regulations there might be, but if it's
just what JM mentioned earlier then doing so would let you have ITAR and
non-ITAR VMs hosted on the same compute nodes as there would be no local
HDD storage involved.
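
Roughly, with the openstack CLI (the 'itar' volume type, AZ, and resource names here are illustrative, not something defined earlier in the thread):

```shell
# Create a bootable volume on the ITAR-only backend via its volume type
openstack volume create --image <image-uuid> --type itar --size 40 itar-boot-vol

# Boot from that volume; nothing lands on compute-local disk
openstack server create --flavor m1.medium --volume itar-boot-vol \
  --availability-zone itar-az --nic net-id=<net-uuid> itar-vm-01
```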

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

Re: [Openstack-operators] Dealing with ITAR in OpenStack private clouds

2017-03-22 Thread Jonathan D. Proulx
On Tue, Mar 21, 2017 at 09:03:36PM -0400, Davanum Srinivas wrote:
:Oops, Hit send before i finished
:
:https://info.massopencloud.org/wp-content/uploads/2016/03/Workshop-Resource-Federation-in-a-Multi-Landlord-Cloud.pdf
:https://git.openstack.org/cgit/openstack/mixmatch
:
:Essentially you can do a single cinder proxy that can work with
:multiple cinder backends (one use case)

The mixmatch stuff is interesting, but it's designed for sharing rather
than exclusion, is very young, and adds complexity that's likely not
wanted here. It is a good read though!

For Block Storage you can have 'volume types' with different back ends, and
you can set quotas per project for each volume type.  I've used this
to deprecate old storage by setting the quota on the 'old' type to zero.
Presumably you could have an ITAR type that ITAR projects have quota on
and a non-ITAR type for other projects, and never the twain shall
meet.
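
For example (type, backend, and project names are illustrative; per-type quota
support depends on your Cinder release):

```shell
# One volume type per physically separate backend
openstack volume type create itar
openstack volume type set --property volume_backend_name=itar-backend itar
openstack volume type create nonitar
openstack volume type set --property volume_backend_name=shared-backend nonitar

# The ITAR project gets quota only on the ITAR type
openstack quota set --volumes 100 --volume-type itar itar-project
openstack quota set --volumes 0 --volume-type nonitar itar-project
```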

For VMs I use host aggregates and instance metadata to separate
'special' hardware.  Again, instance access can be per project, so
having ITAR and non-ITAR aggregates and matching instance types with
appropriate access lists can likely solve that.
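
Something like this sketch (names are illustrative; it assumes
AggregateInstanceExtraSpecsFilter is enabled in the scheduler):

```shell
# Aggregate of ITAR-dedicated hypervisors
openstack aggregate create --property itar=true itar-hosts
openstack aggregate add host itar-hosts compute-itar-01

# Private flavor pinned to that aggregate, shared only with ITAR projects
openstack flavor create --private --ram 8192 --vcpus 4 --disk 40 itar.medium
openstack flavor set --property aggregate_instance_extra_specs:itar=true itar.medium
openstack flavor set --project <itar-project-id> itar.medium
```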

I've not tried to do anything similar with Image Storage, so I'm not sure
if there's a way to restrict projects to specific Glance stores.  If
all images were non-ITAR and only provisioned with restricted
info after launch *maybe* you could get away with that, though I
suppose you'd need to disallow snapshots for ITAR projects
at least...perhaps someone has a better answer here.

-Jon



Re: [Openstack-operators] Dealing with ITAR in OpenStack private clouds

2017-03-21 Thread Blair Bethwaite
On 22 March 2017 at 13:33, Jonathan Mills  wrote:
>
> To what extent is it possible to “lock” a tenant to an availability zone,
> to guarantee that nova scheduler doesn’t land an ITAR VM (and possibly the
> wrong glance/cinder) into a non-ITAR space (and vice versa)…
>

Yes, there are definitely a few different ways to skin that cat with Nova
aggregates and scheduler filters. The answer ultimately depends on what you
want the UX to be like, i.e., for both default non-ITAR projects and
ITAR-specific projects...?
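
For instance, AggregateMultiTenancyIsolation can pin an aggregate to specific
projects (a sketch only; the filter must be in the scheduler's filter list,
and the names/IDs here are illustrative):

```shell
openstack aggregate create itar-agg
openstack aggregate add host itar-agg compute-itar-01
openstack aggregate set --property filter_tenant_id=<itar-project-id> itar-agg
```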

> For just that concern, Mike Lowe was chatting with me off list about using
> Regions….but I should probably let Mike speak for himself if he wants.
> Having never used anything other than the default “RegionOne” I can’t speak
> to the capabilities.
>

Could do certainly, but sounds like a whole lot of extra operational
effort/overhead versus logical separation. The answer probably depends on
both scale and process maturity.

-- 
Cheers,
~Blairo


Re: [Openstack-operators] Dealing with ITAR in OpenStack private clouds

2017-03-21 Thread Jonathan Mills
Blair,

To what extent is it possible to “lock” a tenant to an availability zone, to 
guarantee that nova scheduler doesn’t land an ITAR VM (and possibly the wrong 
glance/cinder) into a non-ITAR space (and vice versa)…

For just that concern, Mike Lowe was chatting with me off list about using 
Regions….but I should probably let Mike speak for himself if he wants.  Having 
never used anything other than the default “RegionOne” I can’t speak to the 
capabilities.


Re: [Openstack-operators] Dealing with ITAR in OpenStack private clouds

2017-03-21 Thread Blair Bethwaite
Dims, it might be overkill to introduce multi-Keystone + federation (I just
quickly skimmed the PDF so apologies if I have the wrong end of it)?

Jon, you could just have multiple cinder-volume services and backends. We
do this in the Nectar cloud - each site has cinder AZs matching nova AZs.
By default the API won't let you attach a volume to a host in a
non-matching AZ, maybe that's enough for you(?), but you could probably
take it further with other cinder scheduler filters.
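
The relevant knobs look roughly like this (Ocata-era option names; the values
are illustrative, so check your release's docs):

```ini
# cinder.conf on an ITAR-dedicated cinder-volume node
[DEFAULT]
storage_availability_zone = itar-az

# nova.conf on the compute nodes: refuse volume attachments across AZs
[cinder]
cross_az_attach = False
```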




-- 
Cheers,
~Blairo


Re: [Openstack-operators] Dealing with ITAR in OpenStack private clouds

2017-03-21 Thread Jonathan Mills
Thank you, Dims.  I will read over this material.




Re: [Openstack-operators] Dealing with ITAR in OpenStack private clouds

2017-03-21 Thread Davanum Srinivas
Oops, hit send before I finished

https://info.massopencloud.org/wp-content/uploads/2016/03/Workshop-Resource-Federation-in-a-Multi-Landlord-Cloud.pdf
https://git.openstack.org/cgit/openstack/mixmatch

Essentially you can do a single cinder proxy that can work with
multiple cinder backends (one use case)

Thanks,
Dims




-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [Openstack-operators] Dealing with ITAR in OpenStack private clouds

2017-03-21 Thread Davanum Srinivas
Jonathan,

The folks from Boston University have done some work around this idea:

https://github.com/openstack/mixmatch/blob/master/doc/source/architecture.rst


On Tue, Mar 21, 2017 at 7:33 PM, Jonathan Mills  wrote:
> Friends,
>
> I’m reaching out for assistance from anyone who may have confronted the
> issue of dealing with ITAR data in an OpenStack cloud being used in some
> department of the Federal Gov.
>
> ITAR (https://www.pmddtc.state.gov/regulations_laws/itar.html) is a less
> restrictive level of security than classified data, but it has some thorny
> aspects to it, particularly where media is concerned:
>
> * you cannot co-mingle ITAR and non-ITAR data on the same physical hard
> drives, and any drive, once it has been “tainted” with any ITAR data, is now
> an ITAR drive
>
> * when ITAR data is destroyed, a DBAN is insufficient — instead, you
> physically shred the drive.  No need to elaborate on how destructive this
> can get if you accidentally mingle ITAR with non-ITAR
>
> Certainly the multi-tenant model of OpenStack holds great promise in Federal
> agencies for supporting both ITAR and non-ITAR worlds, but great care must
> be taken that *somehow* things like Glance and Cinder don’t get mixed up.
> One must ensure that the ITAR tenants can only access Glance/Cinder in ways
> such that their backend storage is physically separate from any non-ITAR
> tenants.  Certainly I understand that Glance/Cinder can support multiple
> storage backend types, such as File & Ceph, and maybe that is an avenue to
> explore to achieving the physical separation.  But what if you want to have
> multiple different File backends?
>
> Do the ACLs exist to ensure that non-ITAR tenants can’t access ITAR
> Glance/Cinder backends, and vice versa?
>
> Or…is it simpler to just build two OpenStack clouds….?
>
> Your thoughts will be most appreciated,
>
>
> Jonathan Mills
>
> NASA Goddard Space Flight Center
>
>



-- 
Davanum Srinivas :: https://twitter.com/dims
