Clint Byrum wrote: 

>Excerpts from Alan Kavanagh's message of 2014-01-15 19:11:03 -0800:
>> Hi Paul
>> 
>> I posted a query to Ironic which is related to this discussion. My thinking 
>> was I want to ensure the case you note here (1) "a tenant can not read 
>> another tenant's disk......" The next (2) was where in Ironic you provision 
>> a baremetal server that has an onboard disk as part of the blade 
>> provisioned to a given tenant-A. Then when tenant-A finishes his baremetal 
>> blade lease and that blade comes back into the pool and tenant-B comes 
>> along, I was asking what open source tools guarantee data destruction so 
>> that no ghost images or file retrieval is possible?
>> 

>Is that really a path worth going down, given that tenant-A could just
>drop evil firmware in any number of places, and thus all tenants afterward
>are owned anyway?

Jumping back to an earlier part of the discussion, it occurs to me that this 
has broader implications. There's some discussion going on under the heading of 
Neutron with regard to PCI passthrough. I imagine it's under Neutron because of 
a desire to provide passthrough access to NICs, but given some of the activity 
around GPU based computing it seems like sooner or later someone is going to 
try to offer multi-tenant cloud servers with GPU passthrough, if they haven't 
already.

I would say that if we're concerned about evil firmware (and I'm certainly not 
saying we shouldn't be concerned) then GPUs are definitely a viable target for 
deploying evil firmware and NICs might be as well. Furthermore, there may be 
cases where direct access to local disk is desirable for performance reasons 
even if the thing accessing the disk is a VM rather than a bare metal server.

Clint's warning about evil firmware should be seriously contemplated by anybody 
doing any work involving direct hardware access regardless of whether it's 
Ironic, Cinder, Neutron or anywhere else. 
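On the data-destruction question upthread: the usual open source candidates are 
things like shred, blkdiscard, or ATA Secure Erase via hdparm, wired into a 
node-cleanup step between leases. As a minimal sketch only (using an ordinary 
file to stand in for the block device, and a made-up wipe_device helper; a real 
deployment would target the actual device node, and plain overwrites still can't 
reach sectors the drive firmware has remapped, which loops back to the 
trust-the-firmware problem):

```python
import os
import tempfile

def wipe_device(path, passes=1, chunk=1024 * 1024):
    """Overwrite every byte of `path` with random data, then zeros.

    A stand-in for shred-style tooling. Real bare-metal cleanup should
    prefer ATA Secure Erase / blkdiscard where supported, since a plain
    overwrite cannot reach firmware-remapped sectors.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):          # random pass(es) first
            f.seek(0)
            remaining = size
            while remaining:
                n = min(chunk, remaining)
                f.write(os.urandom(n))
                remaining -= n
        f.seek(0)                        # final zero pass, easy to verify
        remaining = size
        while remaining:
            n = min(chunk, remaining)
            f.write(b"\x00" * n)
            remaining -= n
        f.flush()
        os.fsync(f.fileno())

# Demo against a throwaway file standing in for tenant-A's disk.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"tenant-A secret data" * 100)
path = tmp.name

wipe_device(path)

with open(path, "rb") as f:
    data = f.read()
assert b"tenant-A" not in data           # nothing recoverable for tenant-B
assert data == b"\x00" * len(data)
os.unlink(path)
```

The point of the final zero pass is that a cleanup job can cheaply verify it 
before returning the blade to the pool, which is the property the original 
question was after.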

_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
