The latter.
On Wed, Oct 10, 2018 at 5:48 AM Shane Ardell wrote:
Nick - To be clear, when you say you can never get them all to pass, do you
mean you can never get all the tests to pass without protractor flake
re-running the failing tests (ie. eventually all the tests pass in the
end), or do you mean you still have failing tests even after
protractor-flake does its re-runs?
This is really good feedback, Nick. I agree, we need them to be reliable
enough to not be a source of constant false positives prior to putting them
into the checklist.
On Thu, Oct 4, 2018 at 15:34 Nick Allen wrote:
I think we still have an issue of reliability. I can never reliably get
them all to pass. I have no idea which failures are real. Am I the only
one that experiences this?
We need a reliable pass/fail on these before we talk about adding them to
the checklist. For example, I just tried to run them…
I ran them locally a handful of times just now, and on average they took
approximately 15 minutes to complete.
On Tue, Oct 2, 2018, 18:22 Michael Miklavcic wrote:
@Shane Just how much time are we talking about, on average? I don't think
many in the community have had much exposure to running the e2e tests in
their current form. It might still be worth it in the short term.
On Tue, Oct 2, 2018 at 10:20 AM Shane Ardell wrote:
The protractor-flake package should catch and re-run false failures, so
people shouldn't be left with failing tests once a run completes. I just
meant that we often re-run flaky tests with protractor-flake, so it can take
a while to run and could increase the build time considerably.
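For anyone who hasn't used protractor-flake: it wraps a normal protractor run and re-executes failed specs up to a retry limit, and everything after a bare -- is passed through to protractor itself. A minimal sketch of how it is typically wired into an npm script — the --max-attempts value and the conf file name here are illustrative assumptions, not necessarily what metron-alerts actually uses:

```json
{
  "scripts": {
    "e2e": "protractor-flake --max-attempts=3 -- protractor.conf.js"
  }
}
```

With this wiring, `npm run e2e` retries failing specs up to three times before reporting a real failure, which is exactly the behavior that lengthens the overall run time discussed above.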
On Tue, Oct 2, 2018 at 10:20 AM Shane Ardell wrote:
Are the tests so brittle that, even with protractor-flake, people will run
into false failures as part of contributing a PR? If so, do we have a list
of the brittle ones (and the things that would disambiguate a true failure
from a false one) that we can add to the documentation?
On Tue, Oct 2, 2018 at
I also would like to eventually have these tests automated. There are a
couple hurdles to setting up our e2e tests to run with our build. I think
the biggest hurdle is setting up a dedicated server with data for the e2e
tests to use. I would assume this requires funding, engineering support,
obfuscated data…
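To make the automation idea concrete, here is one hypothetical shape this could take in the project's CI build. Everything in this sketch — the job layout, the Node version, and the start-e2e-fixtures.sh helper that would stand up the dedicated data server discussed above — is an assumption for illustration, not actual project configuration:

```yaml
jobs:
  include:
    - name: "metron-alerts e2e"
      language: node_js
      node_js: "8"
      before_script:
        # provision the dedicated e2e data fixture
        # (start-e2e-fixtures.sh is a hypothetical helper, not a real script)
        - ./scripts/start-e2e-fixtures.sh
      script:
        - cd metron-interface/metron-alerts
        - npm install
        - npm run e2e
```

Running the e2e suite as a separate job like this would keep its ~15-minute runtime from lengthening the main build, at the cost of maintaining the fixture environment.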
I'd also like to make sure that clear, rock-solid instructions on how to
run them are provided (or linked to).
Looking at
https://github.com/apache/metron/tree/master/metron-interface/metron-alerts#e2e-tests,
would someone who doe…
I'm not super keen on expanding the steps to contribute, especially for
something that should be automated.
That being said, I think that until we get to the point of automating the
e2e tests, it's sensible to add them to the checklist.
So, I would support it, but I would also urge us to move forward with
automating them.
Hello everyone,
In another discussion thread from July, I briefly mentioned the idea of
adding a step to the pull request checklist asking contributors to run the
UI end-to-end tests. Since we aren't running e2e tests as part of the CI
build, it's easy for contributors to unintentionally break the UI.