This totally dropped off my radar; the call-out from the SAI thread
reminded me. Thanks, Benedict.
I think you raised some great points here about what "minimum viable
testing" might look like for a new feature:
> New features should be required to include randomised integration tests
> that
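For concreteness, here is a rough sketch (my own, not from the doc; the class and the `roundTrip` system-under-test are invented stand-ins) of the shape a randomised test could take. The key habit is generating inputs from a seed and logging that seed first, so any failure can be replayed exactly:

```java
import java.util.Random;

// Hypothetical sketch of a seed-logged randomised test.
public class RandomisedTestSketch {
    // Toy stand-in for a real feature's write/read round-trip.
    static int roundTrip(int value) {
        return value;
    }

    public static void main(String[] args) {
        // Accept a seed on the command line to replay a past failure;
        // otherwise pick a fresh one.
        long seed = args.length > 0 ? Long.parseLong(args[0]) : System.nanoTime();
        System.out.println("seed=" + seed); // log the seed before anything else

        Random rnd = new Random(seed);
        for (int i = 0; i < 1000; i++) {
            int input = rnd.nextInt();
            int output = roundTrip(input);
            if (input != output) {
                throw new AssertionError(
                    "mismatch at iteration " + i + ", seed=" + seed);
            }
        }
        System.out.println("ok");
    }
}
```

Run with no arguments for a fresh seed, or `java RandomisedTestSketch <seed>` to reproduce a reported failure deterministically.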
Thanks for getting the ball rolling. I think we need to be a lot more
specific, though, and it may take some time to hash it all out.
For starters, we need to distinguish between types of "done". Are we discussing:
- Release
- New Feature
- New Functionality (for an existing feature)
-
I like that the "we need a Definition of Done" idea seems to be surfacing.
There was no directed intent in opening this thread, but it seems a
serendipitous outcome. And to reiterate: I didn't open this thread with the
hope or intent of getting all of us to agree on anything or explore what we
should or
Perhaps you could clarify what you personally hope we _should_ agree on as a
project, and what you want us _not_ to agree on (to blossom in infinite variety)?
My view: we need to agree on a shared framework for quality going forward. This
will raise the bar for contributions, including above many that
>
> This section reads as very anti-adding tests to test/unit; I am 100% in
> favor of improving/creating our smoke, integration, regression,
> performance, E2E, etc. testing, but don't think I am as negative to
> test/unit, these tests are still valuable and more are welcome.
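To make the distinction concrete, here is a minimal sketch of the kind of cheap, deterministic test/unit case being defended in the quote above (the class name and the `clamp` helper are hypothetical, not from the codebase):

```java
// Hypothetical example of a small, fast, deterministic unit test:
// no cluster, no I/O, just one helper exercised at its boundaries.
public class ClampTest {
    // Toy helper under test: clamps v into the range [lo, hi].
    static int clamp(int v, int lo, int hi) {
        return Math.max(lo, Math.min(hi, v));
    }

    static void assertEquals(int expected, int actual) {
        if (expected != actual) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }

    public static void main(String[] args) {
        assertEquals(5, clamp(5, 0, 10));   // in range: unchanged
        assertEquals(0, clamp(-3, 0, 10));  // below range: clamped to lo
        assertEquals(10, clamp(99, 0, 10)); // above range: clamped to hi
        System.out.println("ok");
    }
}
```

Tests like this complement, rather than compete with, the heavier smoke/integration/E2E layers: they pin down edge-case behaviour in milliseconds.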
I am a strong
Thanks for starting this discussion!
Replying to the thread with what I would have left as comments.
––
> As yet, we lack empirical evidence to quantify the relative stability or
> instability of our project compared to a peer cohort
I think it's more important that we set a standard for the
I am also not fully clear on the motives, but welcome anything which helps
bring in better and more robust testing; thanks for starting this.
Since I cannot comment in the doc, I have to copy/paste and put it here... =(
Reality
> ...
> investing in improving our smoke and integration testing as
The purpose is purely to signal a point of view on the state of testing in
the codebase, some shortcomings of the architecture, and what a few of us
are doing and further planning to do about it. Kind of a "prompt discussion
if anyone has a wild allergic reaction to it, or encourage collaboration
>
> Can you please allow comments on the doc so we can leave feedback?
>
> Doc is view only; figured we could keep this to the ML.
>
That's a feature, not a bug.
Happy to chat here or on Slack with anyone. This is a complex topic, so
long-form or high-bandwidth communication is a better fit than
Can you please allow comments on the doc so we can leave feedback?
On Mon, Jul 13, 2020 at 2:16 PM Joshua McKenzie wrote:
> Link:
>
> https://docs.google.com/document/d/1ktuBWpD2NLurB9PUvmbwGgrXsgnyU58koOseZAfaFBQ/edit#
>
>
> Myself and a few other contributors are working with this point of
Link:
https://docs.google.com/document/d/1ktuBWpD2NLurB9PUvmbwGgrXsgnyU58koOseZAfaFBQ/edit#
Myself and a few other contributors are working with this point of view as
our frame for where we're going to work on improving testing on the project.
I figured it might be useful to foster collaboration