Agreed, we need to get the profiler, MaaS and the REST API deployed via the
mpack sooner rather than later.

On Wed, Mar 22, 2017 at 3:05 PM, Ryan Merriman <merrim...@gmail.com> wrote:

> I think we'll have non-manual REST/web deployment soon regardless of this
> discussion.
>
> On Wed, Mar 22, 2017 at 2:00 PM, Ryan Merriman <merrim...@gmail.com>
> wrote:
>
> > I don't think a cluster installed by ansible is a prerequisite to using
> > ansible to integration test.  They would be completely separate modules
> > except maybe sharing some property or inventory files.  Just need to run
> > scripts and hit REST endpoints, right?  Just an idea, maybe it's overkill.
> > I'm cool with rolling our own.
> >
> > On Wed, Mar 22, 2017 at 1:49 PM, Casey Stella <ceste...@gmail.com>
> wrote:
> >
> >> Maybe, but I'd argue that we would want this to be run against a
> >> non-ansible-installed cluster.  For a first pass, I'd recommend just a
> >> set of shell scripts utilizing the REPL and the REST API along with
> >> shell commands.  Most of our capabilities are quite scriptable.
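A first pass along these lines could be as small as the sketch below. The REST
port, endpoint path, Stellar shell flags, and security handling here are
illustrative assumptions rather than verified defaults:

    #!/usr/bin/env bash
    # Hypothetical smoketest: check that the REST API answers and that the
    # Stellar REPL can evaluate an expression non-interactively.
    set -euo pipefail

    REST_URL="${REST_URL:-http://node1:8082}"           # assumed REST host/port
    METRON_HOME="${METRON_HOME:-/usr/metron/current}"   # assumed install path
    ZK_QUORUM="${ZK_QUORUM:-node1:2181}"

    # 1. Is the REST API reachable?  (Add -u user:password if it is secured.)
    curl -sf "${REST_URL}/api/v1/global/config" > /dev/null \
      || { echo "REST API not reachable at ${REST_URL}"; exit 1; }

    # 2. Can the Stellar REPL evaluate a trivial expression?
    echo 'TO_UPPER("metron")' | "${METRON_HOME}/bin/stellar" -z "${ZK_QUORUM}" \
      | grep -q METRON || { echo "Stellar REPL check failed"; exit 1; }

    echo "Smoketest passed"

Because every check is a plain shell command, the same script could be pointed
at Vagrant, AWS, or an existing cluster unchanged.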
> >>
> >> On Wed, Mar 22, 2017 at 2:47 PM, Ryan Merriman <merrim...@gmail.com>
> >> wrote:
> >>
> >> > Bumping this thread.  Looks like we have several +1s so I propose we
> >> > move to the next step.  I'm anxious to get this done because these
> >> > tests would have saved me time over the last couple weeks.  The
> >> > management UI in https://github.com/apache/incubator-metron/pull/484
> >> > has a set of e2e tests being maintained in another branch so those
> >> > could also be included in this test suite when the UI makes it into
> >> > master.
> >> >
> >> > Ideas for an "Acceptance Testing Framework"?  Could Ansible be a good
> >> > fit for this since we already have it in our stack?
> >> >
> >> > On Mon, Mar 6, 2017 at 1:01 PM, Michael Miklavcic <
> >> > michael.miklav...@gmail.com> wrote:
> >> >
> >> > > Ok, yes I agree. In my experience with e2e/acceptance tests, they're
> >> > > best kept general with an emphasis on verifying that all the
> >> > > plumbing works together. So yes, there are definite edge cases I
> >> > > think we'll want to test here, but I say that with the caveat that I
> >> > > think we should ideally cover as many non-happy-path cases in unit
> >> > > and integration tests as possible. As an example, I don't think it
> >> > > makes sense to cover most of the profiler windowing DSL language
> >> > > edge cases in acceptance tests instead of or in addition to
> >> > > unit/integration tests unless there is something specific to the
> >> > > integration with a given environment that we think could be
> >> > > problematic.
> >> > >
> >> > > M
> >> > >
> >> > > On Mon, Mar 6, 2017 at 11:32 AM, Casey Stella <ceste...@gmail.com>
> >> > wrote:
> >> > >
> >> > > > No, I'm saying that they shouldn't be restricted to real-world
> >> > > > use-cases.  The E2E tests I laid out weren't real-world, but they
> >> > > > did exercise the components similar to real-world use-cases.  They
> >> > > > should also be able to tread outside of the happy-path for those
> >> > > > use-cases.
> >> > > >
> >> > > > On Mon, Mar 6, 2017 at 6:30 PM, Michael Miklavcic <
> >> > > > michael.miklav...@gmail.com> wrote:
> >> > > >
> >> > > > > "I don't think acceptance tests should loosely associate with
> >> > > > > real uses, but they should be free to delve into weird
> >> > > > > non-happy-pathways."
> >> > > > >
> >> > > > > Not following - are you saying they should *tightly* associate
> >> > > > > with real uses and additionally include non-happy-path?
> >> > > > >
> >> > > > > On Fri, Mar 3, 2017 at 12:57 PM, Casey Stella <
> ceste...@gmail.com
> >> >
> >> > > > wrote:
> >> > > > >
> >> > > > > > It is absolutely not a naive question, Matt.  We don't have a
> >> > > > > > lot (or any) docs about our integration tests; it's more of a
> >> > > > > > "follow the lead" type of thing at the moment, but that should
> >> > > > > > be rectified.
> >> > > > > >
> >> > > > > > The integration tests spin up and down infrastructure
> >> > > > > > in-process, some of which are real and some of which are mock
> >> > > > > > versions of the services.  These are good for catching some
> >> > > > > > types of bugs, but often things sneak through, like:
> >> > > > > >
> >> > > > > >    - HBase and Storm can't exist in the same JVM, so HBase is
> >> > > > > >      mocked in those cases.
> >> > > > > >    - The FileSystem that we get for Hadoop is the
> >> > > > > >      LocalRawFileSystem, not truly HDFS.  There are differences
> >> > > > > >      and we've run into them... hilariously at times. ;)
> >> > > > > >    - Things done statically in a bolt are shared across all
> >> > > > > >      bolts because they all are threads in the same process
> >> > > > > >
> >> > > > > > It's good, it catches bugs, it lets us debug things easily, it
> >> > > > > > runs with every single build automatically via travis.
> >> > > > > > It's bad because it's awkward to get the dependencies isolated
> >> > > > > > sufficiently for all of these components to get them to play
> >> > > > > > nice in the same JVM.
> >> > > > > >
> >> > > > > > Acceptance tests would be run against a real cluster, so they
> >> > > > > > would:
> >> > > > > >
> >> > > > > >    - run against real components, not testing or mock components
> >> > > > > >    - run against multiple nodes
> >> > > > > >
> >> > > > > > I can imagine a world where we can unify the two to a certain
> >> > > > > > degree in many cases if we could spin up a docker version of
> >> > > > > > Metron to run as part of the build, but I think in the
> >> > > > > > meantime, we should focus on providing both.
> >> > > > > >
> >> > > > > > I suspect the reference application is possibly inspiring my
> >> > > > > > suggestions here, but I think the main difference here is that
> >> > > > > > the reference application is intended to be informational from
> >> > > > > > an end-user perspective: it's detailing a use-case that users
> >> > > > > > will understand.  I don't think acceptance tests should loosely
> >> > > > > > associate with real uses, but they should be free to delve into
> >> > > > > > weird non-happy-pathways.
> >> > > > > >
> >> > > > > > On Fri, Mar 3, 2017 at 2:16 PM, Matt Foley <ma...@apache.org>
> >> > wrote:
> >> > > > > >
> >> > > > > > > Automating stuff that now has to be done manually gets a big
> >> +1.
> >> > > > > > >
> >> > > > > > > But, Casey, could you please clarify the relationship between
> >> > > > > > > what you plan to do and the current “integration test”
> >> > > > > > > framework?  Will this be in the form of additional integration
> >> > > > > > > tests? Or a different test framework?  Can it be done in the
> >> > > > > > > integration test framework, rather than creating a new
> >> > > > > > > mechanism?
> >> > > > > > >
> >> > > > > > > BTW, if that’s a naïve question, forgive me, but I could find
> >> > > > > > > zero documentation for the existing integration test
> >> > > > > > > capability, neither wiki pages nor READMEs nor Jiras.  If
> >> > > > > > > there are any docs, please point me at them.  Or even archived
> >> > > > > > > email threads.
> >> > > > > > >
> >> > > > > > > There is also something called the “Reference Application”
> >> > > > > > > https://cwiki.apache.org/confluence/display/METRON/Metron+Reference+Application
> >> > > > > > > which sounds remarkably like what you propose to automate.  Is
> >> > > > > > > there / can there / should there be a relationship?
> >> > > > > > >
> >> > > > > > > Thanks,
> >> > > > > > > --Matt
> >> > > > > > >
> >> > > > > > > On 3/3/17, 7:40 AM, "Otto Fowler" <ottobackwa...@gmail.com>
> >> > wrote:
> >> > > > > > >
> >> > > > > > >     +1
> >> > > > > > >
> >> > > > > > >     I agree with Justin’s points.
> >> > > > > > >
> >> > > > > > >
> >> > > > > > >     On March 3, 2017 at 08:41:37, Justin Leet (
> >> > > justinjl...@gmail.com
> >> > > > )
> >> > > > > > > wrote:
> >> > > > > > >
> >> > > > > > >     +1 to both. Having this would especially ease a lot of
> >> > > > > > >     testing that hits multiple areas (which there is a fair
> >> > > > > > >     amount of, given that we're building pretty quickly).
> >> > > > > > >
> >> > > > > > >     I do want to point out that adding this type of thing
> >> > > > > > >     makes the speed of our builds and tests more important,
> >> > > > > > >     because they already take up a good amount of time. There
> >> > > > > > >     are obviously tickets to optimize these things, but I
> >> > > > > > >     would like to make sure we don't pile too much on to every
> >> > > > > > >     testing cycle before a PR. Having said that, I think the
> >> > > > > > >     testing proposed is absolutely valuable enough to go
> >> > > > > > >     forward with.
> >> > > > > > >
> >> > > > > > >     Justin
> >> > > > > > >
> >> > > > > > >     On Fri, Mar 3, 2017 at 8:33 AM, Casey Stella <
> >> > > ceste...@gmail.com
> >> > > > >
> >> > > > > > > wrote:
> >> > > > > > >
> >> > > > > >     > I also propose, once this is done, that we modify the
> >> > > > > >     > developer bylaws and the github PR script to ensure that
> >> > > > > >     > PR authors:
> >> > > > > > >     >
> >> > > > > > >     > - Update the acceptance tests where appropriate
> >> > > > > > >     > - Run the tests as a smoketest
> >> > > > > > >     >
> >> > > > > > >     >
> >> > > > > > >     >
> >> > > > > > >     > On Fri, Mar 3, 2017 at 8:21 AM, Casey Stella <
> >> > > > ceste...@gmail.com
> >> > > > > >
> >> > > > > > > wrote:
> >> > > > > > >     >
> >> > > > > > >     > > Hi All,
> >> > > > > > >     > >
> >> > > > > >     > > After doing METRON-744, where I had to walk through a
> >> > > > > >     > > manual test of every place that Stellar touched, it
> >> > > > > >     > > occurred to me that we should script this.  It also
> >> > > > > >     > > occurred to me that some scripts that are run by the PR
> >> > > > > >     > > author to ensure no regressions and, eventually maybe,
> >> > > > > >     > > even run on an INFRA instance of Jenkins would give all
> >> > > > > >     > > of us some peace of mind.
> >> > > > > > >     > >
> >> > > > > >     > > I am certain that this, along with a couple other
> >> > > > > >     > > manual tests from other PRs, could form the basis of a
> >> > > > > >     > > really great regression acceptance-test suite and I'd
> >> > > > > >     > > like to propose that we do that, as a community.
> >> > > > > > >     > >
> >> > > > > >     > > What I'd like to see from such a suite has the
> >> > > > > >     > > following characteristics:
> >> > > > > > >     > >
> >> > > > > >     > > - Can be run on any Metron cluster, including but not
> >> > > > > >     > >   limited to
> >> > > > > >     > > - Vagrant
> >> > > > > >     > > - AWS
> >> > > > > >     > > - An existing deployment
> >> > > > > >     > > - Can be *deployed* from ansible, but must be able to
> >> > > > > >     > >   be deployed manually
> >> > > > > >     > > - With instructions in the readme
> >> > > > > >     > > - Tests should be idempotent and independent
> >> > > > > >     > > - Tear down what you set up
> >> > > > > > >     > >
> >> > > > > >     > > I think between the Stellar REPL and the fundamental
> >> > > > > >     > > scriptability of the Hadoop services, we can accomplish
> >> > > > > >     > > these tests with a combination of shell scripts and
> >> > > > > >     > > python.
> >> > > > > > >     > >
> >> > > > > > >     > > I propose we break this into the following parts:
> >> > > > > > >     > >
> >> > > > > >     > > - Acceptance Testing Framework with a small smoketest
> >> > > > > >     > > - Baseline Metron Test
> >> > > > > >     > > - Send squid data through the squid topology
> >> > > > > >     > > - Add a threat triage alert
> >> > > > > >     > > - Ensure it gets through to the other side with alerts
> >> > > > > >     > >   preserved
> >> > > > > >     > > - + Enrichment
> >> > > > > >     > > - Add an enrichment in the enrichment pipeline to the
> >> > > > > >     > >   above
> >> > > > > >     > > - + Profiler
> >> > > > > >     > > - Add a profile with a tick of 1 minute to count per
> >> > > > > >     > >   destination address
> >> > > > > >     > > - Base PCap test
> >> > > > > >     > > - Something like the manual test for METRON-743 (
> >> > > > > >     > > https://github.com/apache/incubator-metron/pull/467#issue-210285324
> >> > > > > >     > > )
> >> > > > > > >     > >
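For illustration, the "Baseline Metron Test" step might be scripted roughly as
below, including the tear-down-what-you-set-up idea from the list above. The
Kafka paths, topic name, wait interval, and Elasticsearch query are
assumptions, and the threat triage and enrichment assertions would layer on
top of the same pattern:

    #!/usr/bin/env bash
    # Hypothetical baseline test: push one squid record through the squid
    # topology and confirm it lands in the search index, cleaning up on exit.
    set -euo pipefail

    BROKER_LIST="${BROKER_LIST:-node1:6667}"
    ES_URL="${ES_URL:-http://node1:9200}"
    KAFKA_BIN="${KAFKA_BIN:-/usr/hdp/current/kafka-broker/bin}"
    SQUID_LOG='1461576382.642    161 127.0.0.1 TCP_MISS/200 103701 GET http://www.example.com/ - DIRECT/93.184.216.34 text/html'

    cleanup() {
      # Tear down whatever the test set up (placeholder for index/topic cleanup).
      echo "cleaning up test artifacts"
    }
    trap cleanup EXIT

    # 1. Produce a single squid log line onto the parser's input topic.
    echo "${SQUID_LOG}" | "${KAFKA_BIN}/kafka-console-producer.sh" \
      --broker-list "${BROKER_LIST}" --topic squid

    # 2. Give the parser/enrichment/indexing topologies time to process it.
    sleep 60

    # 3. Assert that at least one squid document was indexed.
    count=$(curl -s "${ES_URL}/squid*/_count" | python -c \
      'import json,sys; print(json.load(sys.stdin)["count"])')
    [ "${count}" -gt 0 ] || { echo "no squid records indexed"; exit 1; }

    echo "Baseline test passed: ${count} squid record(s) indexed"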
> >> > > > > > >     > > Thoughts?
> >> > > > > > >     > >
> >> > > > > > >     > >
> >> > > > > > >     > > Best,
> >> > > > > > >     > >
> >> > > > > > >     > > Casey
> >> > > > > > >     > >
> >> > > > > > >     >
> >> > > > > > >
> >> > > > > > >
> >> > > > > > >
> >> > > > > > >
> >> > > > > >
> >> > > > >
> >> > > >
> >> > >
> >> >
> >>
> >
> >
>
