> On Nov 27, 2017, at 6:45 AM, Adi Roiban <a...@roiban.ro> wrote:
> 
> On 26 November 2017 at 22:30, Glyph <gl...@twistedmatrix.com> wrote:
>> 
>>> On Nov 26, 2017, at 10:23 AM, Adi Roiban <a...@roiban.ro> wrote:
>>> 
>>> Hi,
>>> 
>>> What is the procedure for accepting patches for platforms for which we
>>> don't have a continuous testing system?
>>> 
>>> This is a follow up of the review from
>>> https://twistedmatrix.com/trac/ticket/9323
> [snip]
>>> 
>> 
>> To support a platform—i.e. to promise we will not break it in the future—we  
>> have to have a builder capable of running Twisted on that platform.
>> 
>> To accept a patch though, the important thing is to have test coverage. If 
>> the change in question is a platform-specific feature that there’s no way to 
>> test in a different environment, we would indeed need to block on the 
>> availability of a builder.
>> 
>> Quite often—as I believe is the case for the patch you’re referring to 
>> here—patches are adjusting behaviors which would be difficult to cleanly 
>> integration-test on the platform in question anyway, and the appropriate 
>> thing to do is to make some mock match the observed behavior of the platform 
>> in question.  In this case it would be nice if the reviewer could verify 
>> this, but if the patch submitter’s description of observed behavior is 
>> clear, and their test mock implements it correctly, I would say that we can 
>> give them the benefit of the doubt and accept such patches even if the 
>> reviewer doesn’t have such a platform available.  Assuming all the changed 
>> lines are covered adequately for our supported platforms’ behavior as well, 
>> of course, and we aren’t risking breaking them.
>> 
>> How do you feel about this answer?
> 
> Thanks for your comment; I think I know what a reviewer should
> do for https://twistedmatrix.com/trac/ticket/9323
> From the ticket description, we already have a test which fails on
> Cygwin and the PR does not add any new tests.
> So the options are:
> 
> A. Get a builder for Cygwin and use the existing tests.
> B. Write new unit/integration tests with a fake Cygwin system.
> 
> -----
> 
> I understand the high-level process, but I have questions regarding the 
> details.
> 
> How can a reviewer (who does not have experience with, or access to, an
> unsupported platform) verify whether the fake/surrogate implementation
> is correct?

At some level, I think it is OK to take the submitter's word for it.  The 
important thing about code review is not courtroom-style adversarial 
disbelief, but rather the recognition that it is super easy to make mistakes 
while writing programs.  If the submitter clearly describes the same behavior 
in their tests, their implementation, the ticket description, and the NEWS 
fragment, and the reviewer can verify that all of these are consistent, then I 
don't think the reviewer needs to be able to personally reproduce the behavior 
on an obscure platform.

Ideally, as Jean-Paul suggests, the submitter would provide a verified fake 
<https://thoughtstreams.io/glyph/test-patterns/5952/> implementation.  In 
practice they may just provide a well-documented fake, since verifying the 
fake against the real implementation may be impossible to do programmatically, 
or impractically difficult.  (However, a genuine mock, or a dummy, is 
definitely insufficient for testing.)
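
To make that concrete, here is roughly the shape I have in mind, as a minimal 
sketch.  The names (FakeCygwinPipe, RealPipe, PipeBehaviorTestsMixin) and the 
specific behavior asserted are invented for illustration; this is not the 
actual code touched by #9323.  The point is that the same test mixin runs 
against both the fake (on every platform) and the real implementation (only 
where it can actually run), and that shared suite is what makes the fake 
"verified" rather than a mock:

    import errno
    import os
    import sys

    from twisted.trial import unittest


    class FakeCygwinPipe:
        """
        In-memory stand-in encoding the behavior the submitter reports
        having observed on the real platform (hypothetical example).
        """
        def __init__(self):
            self._readerClosed = False

        def closeReader(self):
            self._readerClosed = True

        def write(self, data):
            if self._readerClosed:
                raise OSError(errno.EPIPE, "Broken pipe")
            return len(data)


    class RealPipe:
        """
        Thin wrapper around os.pipe(); only meaningful on the platform
        whose behavior we are trying to capture.
        """
        def __init__(self):
            self._reader, self._writer = os.pipe()

        def closeReader(self):
            os.close(self._reader)

        def write(self, data):
            return os.write(self._writer, data)


    class PipeBehaviorTestsMixin:
        """
        Assertions that every implementation, fake or real, must satisfy.
        """
        def makePipe(self):
            raise NotImplementedError()

        def test_writeAfterReaderClosed(self):
            pipe = self.makePipe()
            pipe.closeReader()
            error = self.assertRaises(OSError, pipe.write, b"x")
            self.assertEqual(errno.EPIPE, error.errno)


    class FakePipeTests(PipeBehaviorTestsMixin, unittest.SynchronousTestCase):
        def makePipe(self):
            return FakeCygwinPipe()


    class RealPipeTests(PipeBehaviorTestsMixin, unittest.SynchronousTestCase):
        if sys.platform != "cygwin":
            skip = "Real pipe behavior is only observable on Cygwin."

        def makePipe(self):
            return RealPipe()

If the behavior observed on Cygwin ever turns out to differ, only 
FakeCygwinPipe needs updating, and the shared mixin flags the divergence the 
moment a Cygwin builder (or a reviewer with Cygwin) runs RealPipeTests.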

> That is, without putting in significant effort reading the details of
> the unsupported platform and, in the process, becoming an expert in
> that platform?

In this case, reviewing the subtle difference in pipe-handling behavior, which 
is shared at least in part with other platforms, should not require becoming a 
total expert in Cygwin's runtime.  If it does, we may in fact need to wait for 
a reviewer to come along who already has that expertise.
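
For what it's worth, the kind of evidence a reviewer can check without the 
platform is a small probe attached to the ticket; something like the following 
(purely illustrative, not the actual reproduction from #9323), run by the 
submitter on Cygwin and by the reviewer on a supported platform, with the 
recorded errno being what the fake then has to reproduce:

    import errno
    import os

    # Hypothetical probe: record what actually happens when writing to a
    # pipe whose read end has already been closed.  The submitter runs it
    # on Cygwin, the reviewer runs it elsewhere, and the fake used in the
    # test suite must reproduce the Cygwin result.
    reader, writer = os.pipe()
    os.close(reader)
    try:
        os.write(writer, b"x")
    except OSError as e:
        print("write after reader closed ->",
              errno.errorcode.get(e.errno, e.errno))
    else:
        print("write after reader closed -> no error")
    finally:
        os.close(writer)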

-glyph