On Fri, 14 Oct 2016, George Dunlap wrote:
> On 14/10/16 07:36, Jan Beulich wrote:
> >>>> On 14.10.16 at 02:58, <sstabell...@kernel.org> wrote:
> >> On Fri, 14 Oct 2016, Andrew Cooper wrote:
> >>> There should be a high barrier to "Supported" status, because the cost
> >>> of getting it wrong is equally high.  However, there are perfectly
> >>> legitimate intermediate stages such as "Supported in these limited set
> >>> of circumstances".  A number of features are not plausibly for use in
> >>> production environments, but otherwise function fine for
> >>> development/investigatory purposes.  In these cases, something like "no
> >>> security support, but believed to be working fine" might be appropriate.
> >>
> >> I agree on this. I think we need an intermediate step: "working but not
> >> supported for security" is completely reasonable. When we say that it is
> >> "working", it should be because we have automated testing for it (I
> >> don't know if I would go as far as requiring it to be in OSSTest, any
> >> automated testing, even third party, would do). If it is not
> >> automatically tested, then it is just "best effort".
> > 
> > I don't think this is a reasonable expectation - how would you envision
> > testing the dozens of command line options alone, not to speak of
> > things depending on hardware characteristics?
> Well there may be situations where we can make reasonable exceptions.
> But it would certainly be a lot better if a feature wasn't considered
> "done" until there was something in place to make sure that it worked
> and continued to work, other than "we hope people use it and report any
> bugs they find".

It is difficult to generalize because, as Jan wrote, some things come
with dozens of command line options and others come with none. This is
where we'll have to apply our judgment on a case by case basis. But
indeed the basic idea is that a feature is "done" when there is some
testing for it, where "some" is case specific. Community testing is
great, but automated testing is more reliable and predictable.

At the same time, this is an Open Source community and we might get
contributions from people who aren't paid to work on Xen. We cannot ask
all contributors to write automated testing scripts, let alone offer
security support. A contributor might write good code for a new feature
but decline to write the testing for it. I think that's OK: we can take
that code, but then we need to clarify the status of that feature to
the community.

On one hand, we don't want users to think a feature is fully stable and
supported when it actually is not; that negatively impacts the Xen
brand and reputation. On the other hand, we don't want to discourage
contributions by asking contributors to commit to security support,
automated testing, ABI stability etc. from the start. Rome wasn't built
in a day :-)
Can you imagine if we had accepted ARM patches in Xen only after Xen on
ARM was ready for automated testing and security support?

The best way to achieve both goals is to clarify the right level of
quality/stability/support for each feature.

> The more interesting aspect of Stefano's suggestion here is whether
> there should be two levels of "supported" -- "supported" as in, "this
> works but it's not in our security boundary", and "supported" as in,
> "this works and it *is* in our security boundary".

I think the security team should be free to decline to offer support
for something which might otherwise be considered working and stable.
It need not even be a vote of no confidence: after all, the security
team has only a finite amount of resources, maybe not enough to cover
all areas of the project.

> But as we don't *yet* have such a decision-making process in place, I
> think we need to approach each change in an ad-hoc manner.  Having a
> discussion about *credit2* which includes the security aspects makes
> sense, and I don't think we need to wait until we've got a generalized
> framework to make a reasonable decision about that.

By no means did I mean to delay Dario's work.

Xen-devel mailing list