Re: [openstack-dev] Evil Firmware
The physical function is the one with the real PCI config space, so as long as the host controls it there should be minimal risk from the guests, since they have limited access via the virtual functions: typically mostly just message-passing to the physical function. As long as it's a whitelist of audited message handlers, that's fine. Of course, if the message handlers haven't been audited, who knows what's lurking in there.

-Rob

--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
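[A minimal sketch of the whitelist idea described above: the PF side dispatches only those VF mailbox messages whose opcodes appear in an audited handler table and fails closed on everything else. The opcodes and handler names here are purely illustrative, not any real driver's API.]

```python
# Hypothetical PF-side dispatch table: only audited handlers are reachable.
AUDITED_HANDLERS = {
    0x01: lambda payload: ("set_mac", payload),   # audited: set VF MAC
    0x02: lambda payload: ("set_vlan", payload),  # audited: set VF VLAN
}

def pf_dispatch(opcode, payload):
    """Dispatch a VF->PF mailbox message, rejecting unknown opcodes.

    Anything not in the whitelist is dropped rather than interpreted,
    so an unaudited code path can never be reached from a guest.
    """
    handler = AUDITED_HANDLERS.get(opcode)
    if handler is None:
        return ("rejected", opcode)  # fail closed on unaudited messages
    return handler(payload)
```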
Re: [openstack-dev] Evil Firmware
On 17 January 2014 01:16, Chris Friesen chris.frie...@windriver.com wrote:
> On 01/16/2014 05:12 PM, CARVER, PAUL wrote:
>> Jumping back to an earlier part of the discussion, it occurs to me that this has broader implications. There's some discussion going on under the heading of Neutron with regard to PCI passthrough. I imagine it's under Neutron because of a desire to provide passthrough access to NICs, but given some of the activity around GPU based computing it seems like sooner or later someone is going to try to offer multi-tenant cloud servers with the ability to do GPU based computing if they haven't already.
>
> I'd expect that the situation with PCI passthrough may be a bit different, at least in the common case. The usual scenario is to use SR-IOV to have a single physical device expose a bunch of virtual functions, and then a virtual function is passed through into a guest.

That entirely depends on the card in question. Some cards support SR-IOV and some don't (you wouldn't normally use SR-IOV on a GPU, as I understand it, though you might reasonably expect it on a modern network card). Even on cards that do support SR-IOV there's nothing stopping you assigning the whole card. From the discussion here it seems that (whole-card passthrough) + (reprogrammable firmware) is the dangerous combination, and there's no way for the passthrough code in Nova to tell programmatically whether any given card has reprogrammable firmware.

It's a fairly safe bet you can't reprogram firmware permanently from a VF, agreed.

--
Ian.
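[One part of the gap above is discoverable from the host: whether a device supports SR-IOV at all is exposed via the sysfs `sriov_totalvfs` attribute on Linux, even though firmware reprogrammability is not visible anywhere. A hedged sketch, assuming a Linux host; the PCI address is an example value.]

```python
import os

def sriov_total_vfs(pci_addr, sysfs_root="/sys/bus/pci/devices"):
    """Return how many VFs a device can expose, or 0 if it lacks SR-IOV.

    The sriov_totalvfs attribute exists only for SR-IOV-capable physical
    functions. Note this says nothing about whether the card's firmware
    is field-reprogrammable, which is exactly the blind spot discussed
    in the thread.
    """
    path = os.path.join(sysfs_root, pci_addr, "sriov_totalvfs")
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return 0  # attribute absent or unreadable: no SR-IOV support
```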
Re: [openstack-dev] Evil Firmware
On 17 January 2014 09:12, Robert Collins robe...@robertcollins.net wrote:
> The physical function is the one with the real PCI config space, so as long as the host controls it then there should be minimal risk from the guests since they have limited access via the virtual functions--typically mostly just message-passing to the physical function. As long as its a whitelist of audited message handlers, thats fine. Of course, if the message handlers haven't been audited, who knows whats lurking in there.

That description doesn't quite gel with my understanding: SR-IOV VFs *do* have a PCI space that you can map in, and typically DMA as well (virtualised via the page tables for the VM). However, some functions of the card may not be controllable in that space (for network devices: VLAN encapsulation, promiscuity, and so on), and for those you may have to make a request from the VF in the VM to the PF in the host kernel. The message channels in question are implemented in the PF and VF drivers in the Linux kernel code, the PF end being the one where security matters, since a sufficiently malicious VM can try it on at the VF end and see what happens. I don't know whether you consider that audited enough.

--
Ian.
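[The trust boundary described above, sketched in miniature: requests arriving over the VF->PF channel are untrusted, so the PF end validates them against host-side per-VF policy before touching hardware. The class and method names are illustrative assumptions, not any real kernel driver's interface.]

```python
class PfMailbox:
    """Toy model of the PF end of a VF->PF mailbox channel."""

    def __init__(self, allowed_vlans_per_vf):
        # Host-side policy: vf_index -> set of VLAN ids that VF may use.
        self.allowed = allowed_vlans_per_vf

    def handle_set_vlan(self, vf_index, vlan_id):
        """Validate an untrusted VF request before acting on it.

        A malicious guest can send anything it likes at the VF end;
        the PF handler must never assume the request is legitimate.
        """
        if vlan_id not in self.allowed.get(vf_index, set()):
            return False  # reject: guest is "trying it on"
        # ...here a real driver would program the NIC's VLAN filter...
        return True
```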
Re: [openstack-dev] Evil Firmware
Clint Byrum wrote:
> Excerpts from Alan Kavanagh's message of 2014-01-15 19:11:03 -0800:
>> Hi Paul
>>
>> I posted a query to Ironic which is related to this discussion. My thinking was that I want to ensure the case you note here: (1) a tenant cannot read another tenant's disk. The next case (2) was where, in Ironic, you provision a baremetal server that has an onboard disk as part of the blade provisioned to a given tenant-A. When tenant-A finishes his baremetal blade lease, that blade comes back into the pool, and tenant-B comes along, what open source tools guarantee data destruction so that no ghost images or file retrieval are possible?
>
> Is that really a path worth going down, given that tenant-A could just drop evil firmware in any number of places, and thus all tenants afterward are owned anyway?

Jumping back to an earlier part of the discussion, it occurs to me that this has broader implications. There's some discussion going on under the heading of Neutron with regard to PCI passthrough. I imagine it's under Neutron because of a desire to provide passthrough access to NICs, but given some of the activity around GPU based computing it seems like sooner or later someone is going to try to offer multi-tenant cloud servers with the ability to do GPU based computing, if they haven't already.

I would say that if we're concerned about evil firmware (and I'm certainly not saying we shouldn't be) then GPUs are definitely a viable target for deploying evil firmware, and NICs might be as well. Furthermore, there may be cases where direct access to local disk is desirable for performance reasons even if the thing accessing the disk is a VM rather than a bare metal server.

Clint's warning about evil firmware should be seriously contemplated by anybody doing any work involving direct hardware access, regardless of whether it's Ironic, Cinder, Neutron or anywhere else.
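[On Alan's data-destruction question: tools like shred(1), blkdiscard(8), and the drive's own secure-erase commands exist for this. A minimal sketch of the simplest approach, a single-pass zero overwrite; as Clint points out, no overwrite helps if the firmware itself has been tampered with, and overwrites also cannot reach sectors the drive has remapped, so device-level secure erase is preferable in practice.]

```python
import os

def wipe(path, block_size=1 << 20):
    """Single-pass zero overwrite of a file (or, with care, a device node).

    Sketch only: real sanitization should prefer ATA Secure Erase or
    NVMe Format, since host-side overwrites never see remapped sectors
    or spare area, and nothing here addresses compromised firmware.
    """
    size = os.path.getsize(path)
    zeros = b"\0" * block_size
    with open(path, "r+b") as f:
        remaining = size
        while remaining > 0:
            n = min(block_size, remaining)
            f.write(zeros[:n])
            remaining -= n
        f.flush()
        os.fsync(f.fileno())  # force the zeros out to stable storage
```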
Re: [openstack-dev] Evil Firmware
On 01/16/2014 05:12 PM, CARVER, PAUL wrote:
> Jumping back to an earlier part of the discussion, it occurs to me that this has broader implications. There's some discussion going on under the heading of Neutron with regard to PCI passthrough. I imagine it's under Neutron because of a desire to provide passthrough access to NICs, but given some of the activity around GPU based computing it seems like sooner or later someone is going to try to offer multi-tenant cloud servers with the ability to do GPU based computing if they haven't already.

I'd expect that the situation with PCI passthrough may be a bit different, at least in the common case. The usual scenario is to use SR-IOV to have a single physical device expose a bunch of virtual functions, and then a virtual function is passed through into a guest. The physical function is the one with the real PCI config space, so as long as the host controls it there should be minimal risk from the guests, since they have limited access via the virtual functions: typically mostly just message-passing to the physical function.

Chris