On 08/07/2017 04:59 PM, Jan Beulich wrote:
>>>> George Dunlap <george.dun...@citrix.com> 08/07/17 12:27 PM >>>
>> So it seems that people are still not quite clear about what I'm
>> proposing.
>
> And indeed your examples helped me understand better what you mean
> (or at least I hope they did).
>
>> Suppose someone builds a livepatch with the correct compiler, with a
>> correct patch (that would fix the bug if rebooted into a new
>> hypervisor), with correct fix-up code.  Suppose that the bug passes
>> all reasonable testing; but that, *due to a bug in the tools*, the
>> patch also gives PV guests access to hypervisor memory.  Is this a
>> security issue?  Yes -- the human told it to do safe thing X ("build
>> a livepatch based on correct inputs to fix this bug") and it did
>> unsafe thing Y ("build a livepatch that opens up a new security
>> hole").
>
> There's one more factor here: The livepatch tools may behave properly
> with one version of the compiler, and improperly with another.
I don't really understand the reasoning here.  Is this your argument: "One can imagine a security-critical livepatch bug that affects only, say, gcc 6.x and not gcc 5.x or 7.x; therefore, we should never issue XSAs for any security-critical livepatch bugs"?

If we found that the livepatching tools make an incorrect patch only when using gcc 5.x, and we have reason to believe that some people may be using gcc 5.x, then I think we should issue an XSA and say that it only affects people compiling Xen with gcc 5.x.

It would probably make sense to specify some range of compiler versions for which we will issue XSAs for the livepatch tools.  A good baseline would be the versions of gcc that Xen itself uses, and we can restrict it further if we need to (for instance, if some versions of gcc are missing requisite features, or are simply known to be buggy).

And remember, this is not "We have tested all compiler versions and promise you there are no bugs."  It's, "If someone finds a bug for this set of compilers, we will tell you about it so you can do something about it."

>> We could even place more restrictions on the scope if we wanted to.
>> We could say that we only support the livepatch tools generating
>> patches for XSAs.
>
> For me, much depends on how tight such restrictions would be.  I.e.
> with the examples given above, how would we determine a canonical
> livepatch-tools / hypervisor pair (or set of pairs)?  After all tools
> mis-behavior may be a result of some custom patch in someone's
> derived tree.

Well, suppose that we issued an XSA with a patch, and suppose it was later discovered that the patch opened up a different security hole when applied on the upstream tree.  Would we issue another XSA and/or an update to the existing XSA?  I think obviously yes, we would.
Suppose instead we issued an XSA with a patch, and that it was later discovered that the patch opened up a different security hole when applied on top of XenServer's patchqueue, but not on baseline XenProject code.  Would we issue another XSA and/or an update to an existing XSA?  The obvious *default* answer to that is "No; it's not practical for us to deal with software that is not under the XenProject's control."  One could imagine circumstances in which we issue statements or an XSA anyway, but that would be the exception and not the rule.

I think the same kind of thing would apply to the livepatch tools: *by default*, we only issue XSAs for the livepatch tools if they create security issues when generating blobs based on security patches issued by the XenProject, and on top of XenProject-released software.  As always, if there's some unforeseen circumstance, then someone could argue for an exception.

>>> This is very similar to what XSA-155 was - the GCC compiler
>>> optimizations added a nice jump table that was accessed twice.  And
>>> the offset was retrieved from the shared ring.
>>>
>>> But we didn't do an XSA-155 for the GCC compiler.  That is, we
>>> didn't file a ticket with GCC saying 'Hey, your compiler can create
>>> a race on shared memory.  Could you make your compiler be smarter
>>> in these cases?'  We instead wrote code with this optimization in
>>> mind, with more barriers.
>>
>> Right -- so the gcc compiler guys are using a specification that
>> allows that behavior.  So from their perspective, we told the
>> compiler to do unsafe thing Y (or at least, said that we were OK
>> with it doing unsafe thing Y), and it did unsafe thing Y -- a
>> security issue for Xen, but not for gcc.  If gcc had *violated* the
>> spec when causing the security issue, then we certainly would have
>> called that a security issue in gcc.
>
> But would we have issued an XSA?  Wouldn't that rather be a CVE
> against gcc then?
This is changing the question slightly, from "Should X have security support?" to "If X is to be security supported, what organization and process should be used to support it?"

Obviously in the case of gcc, we would primarily handle the security issue the way the gcc project handles security issues (which may be nothing at all, for all I know).  (Although depending on the bug and the circumstances, we might still issue an advisory to raise awareness among downstreams who might have compiled Xen with a particular version of gcc.)

If livepatch-tools were an external project run by somebody else, with their own security process, then we would report the issue to them and let them handle it.  Since livepatch-tools is developed by Xen developers, for Xen downstreams and users, if it is to be security supported, then it seems to me that the obvious thing to do is to support it within the XenProject security response process.

I mean, if someone *wants* to set up an independent organization, with an independent security team and security process, to handle the livepatch project, then I guess that would be OK with me -- I don't care so much *who* does the security support, as long as it gets done.

 -George

_______________________________________________
Xen-devel mailing list
Xenfirstname.lastname@example.org
https://lists.xen.org/xen-devel