+1. I agree with Justin's points.
On March 3, 2017 at 08:41:37, Justin Leet (justinjl...@gmail.com) wrote:

+1 to both. Having this would especially ease a lot of testing that hits
multiple areas (of which there is a fair amount, given that we're building
pretty quickly).

I do want to point out that adding this type of thing makes the speed of
our builds and tests more important, because they already take up a good
amount of time. There are obviously tickets to optimize these things, but
I would like to make sure we don't pile too much onto every testing cycle
before a PR.

Having said that, I think the testing proposed is absolutely valuable
enough to go forward with.

Justin

On Fri, Mar 3, 2017 at 8:33 AM, Casey Stella <ceste...@gmail.com> wrote:

> I also propose, once this is done, that we modify the developer bylaws
> and the GitHub PR script to ensure that PR authors:
>
>    - Update the acceptance tests where appropriate
>    - Run the tests as a smoketest
>
> On Fri, Mar 3, 2017 at 8:21 AM, Casey Stella <ceste...@gmail.com> wrote:
>
> > Hi All,
> >
> > After doing METRON-744, where I had to walk through a manual test of
> > every place that Stellar touched, it occurred to me that we should
> > script this. It also occurred to me that some scripts run by the PR
> > author to ensure no regressions (and, eventually, maybe even run on an
> > INFRA instance of Jenkins) would give all of us some peace of mind.
> >
> > I am certain that this, along with a couple of other manual tests from
> > other PRs, could form the basis of a really great regression
> > acceptance-test suite, and I'd like to propose that we build it as a
> > community.
> >
> > I'd like such a suite to have the following characteristics:
> >
> >    - Can be run on any Metron cluster, including but not limited to:
> >       - Vagrant
> >       - AWS
> >       - An existing deployment
> >    - Can be *deployed* from Ansible, but must be able to be deployed
> >      manually
> >       - With instructions in the README
> >    - Tests should be idempotent and independent
> >       - Tear down what you set up
> >
> > I think between the Stellar REPL and the fundamental scriptability of
> > the Hadoop services, we can accomplish these tests with a combination
> > of shell scripts and Python.
> >
> > I propose we break this into the following parts:
> >
> >    - Acceptance Testing Framework with a small smoketest
> >    - Baseline Metron Test
> >       - Send squid data through the squid topology
> >       - Add a threat triage alert
> >       - Ensure it gets through to the other side with alerts preserved
> >    - + Enrichment
> >       - Add an enrichment in the enrichment pipeline to the above
> >    - + Profiler
> >       - Add a profile with a tick of 1 minute to count per destination
> >         address
> >    - Base PCap test
> >       - Something like the manual test for METRON-743
> >         (https://github.com/apache/incubator-metron/pull/467#issue-210285324)
> >
> > Thoughts?
> >
> > Best,
> >
> > Casey
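
To make the "Baseline Metron Test" item above concrete, the smoke test could
look something like the sketch below: push a single squid record onto the
squid Kafka topic and poll Elasticsearch until it appears in the index. The
broker and Elasticsearch addresses, the squid_index* index pattern, the sample
log line, and the assumption that Kafka's console producer is on the PATH are
all placeholders for illustration, not a definitive implementation; checking
that the threat triage alert fields survive would be an additional assertion
on the returned document.

#!/usr/bin/env bash
# Smoke-test sketch: push one squid log line through the squid topology and
# poll the index until it appears.  BROKERLIST, ES_HOST, the index pattern,
# and the sample record are placeholders.
set -euo pipefail

BROKERLIST="${BROKERLIST:-node1:6667}"
ES_HOST="${ES_HOST:-http://node1:9200}"
SAMPLE='1489000000.123    100 127.0.0.1 TCP_MISS/200 2000 GET http://example.com/ - DIRECT/93.184.216.34 text/html'

# 1. Send a single squid record to the squid Kafka topic.
echo "$SAMPLE" | kafka-console-producer.sh --broker-list "$BROKERLIST" --topic squid

# 2. Poll Elasticsearch for the record (loose full-text match on the hostname).
for attempt in $(seq 1 30); do
  HITS=$(curl -s "$ES_HOST/squid_index*/_search?q=example.com" \
    | python -c 'import json,sys; print(json.load(sys.stdin).get("hits",{}).get("total",0))' \
    || echo 0)
  if [ "${HITS:-0}" -gt 0 ]; then
    echo "PASS: squid record indexed"
    exit 0
  fi
  sleep 10
done

echo "FAIL: squid record was not indexed within the timeout"
exit 1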
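
Along the same lines, the "+ Profiler" step (a profile with a 1-minute tick
counting per destination address) could be scripted roughly as below. The
profile name, the config paths, the use of zk_load_configs.sh, and the
profiler.period.duration properties are assumptions about how the profiler is
typically configured; treat this as a sketch rather than the definitive recipe.

#!/usr/bin/env bash
# Sketch: define a profile that counts messages per destination address and
# push it to ZooKeeper.  METRON_HOME, ZOOKEEPER, and the config path are
# placeholders, and zk_load_configs.sh is assumed to be available.
set -euo pipefail

METRON_HOME="${METRON_HOME:-/usr/metron/current}"
ZOOKEEPER="${ZOOKEEPER:-node1:2181}"
CONFIG_DIR="${CONFIG_DIR:-$METRON_HOME/config/zookeeper}"

# 1. Write the profile definition: one counter per ip_dst_addr.
cat > "$CONFIG_DIR/profiler.json" <<'EOF'
{
  "profiles": [
    {
      "profile": "count-by-dst",
      "foreach": "ip_dst_addr",
      "onlyif":  "exists(ip_dst_addr)",
      "init":    { "count": "0" },
      "update":  { "count": "count + 1" },
      "result":  "count"
    }
  ]
}
EOF

# 2. Push the configuration to ZooKeeper.
"$METRON_HOME/bin/zk_load_configs.sh" -m PUSH -i "$CONFIG_DIR" -z "$ZOOKEEPER"

# 3. The 1-minute tick would be set via the profiler properties (e.g.
#    profiler.period.duration=1, profiler.period.duration.units=MINUTES), and
#    the test could then read the profile back from the Stellar REPL with
#    PROFILE_GET('count-by-dst', <address>, PROFILE_FIXED(30, 'MINUTES')).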