I see, thanks for explaining it to me, Dima.
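
For the archives, here's my understanding of the behavior we discussed
(TestFoo is just a placeholder test name):

    mvn test -Dtest=TestFoo                # runs only TestFoo
    mvn install -Dtest=TestFoo             # also runs the integration tests,
                                           # since integration-test precedes
                                           # install in the lifecycle
    mvn install -Dtest=TestFoo -DskipITs   # runs TestFoo, skips the ITs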

On Mon, May 30, 2016 at 10:18 AM, Dima Spivak <[email protected]> wrote:

> Running the install phase of the lifecycle will run the integration tests
> (since the integration-test phase comes before install), so that's by
> design. Pass the -DskipITs option to prevent that from happening.
>
> -Dima
>
> On Monday, May 30, 2016, Apekshit Sharma <[email protected]> wrote:
>
> > Okay, so it was because mvn install -Dtest=foo always runs the
> > integration tests in addition to the test foo, whereas mvn test
> > -Dtest=foo runs only the test foo. This seems like a bug to me. Or is
> > it by choice? If the latter, what's the rationale?
> >
> > On Mon, May 30, 2016 at 2:37 AM, Apekshit Sharma <[email protected]> wrote:
> >
> > > Just a heads up: you might see flaky tests reappear in precommits
> > > because I had to reset the flaky list. Somehow integration tests
> > > started showing up in the list, which is weird since they don't run
> > > as part of trunk_matrix, and they were screwing up the Flaky-Tests
> > > build, so I had to flush them out.
> > >
> > > On Sun, May 22, 2016 at 8:19 PM, Todd Lipcon <[email protected]> wrote:
> > >
> > >> On Sun, May 22, 2016 at 10:12 AM, Stack <[email protected]> wrote:
> > >>
> > >> > On Fri, May 20, 2016 at 3:43 PM, Todd Lipcon <[email protected]> wrote:
> > >> >
> > >> > > On Fri, May 20, 2016 at 1:17 PM, Matteo Bertozzi <[email protected]> wrote:
> > >> > >
> > >> > > > Any suggestions on how to make people aware of the tests being
> > >> > > > flaky?
> > >> > > >
> > >> > >
> > >> > > You guys might consider doing something like what we do for
> > >> > > Apache Kudu (incubating):
> > >> > >
> > >> > > http://dist-test.cloudera.org:8080/ has a dashboard (driven from
> > >> > > our flaky-tests job) which shows the percent flakiness of each
> > >> > > test, as well as a breakdown of pass/fail rates by revision. We
> > >> > > don't automatically email these to the list or anything currently,
> > >> > > but it would be pretty easy to set up a cron job to do so.
> > >> > >
> > >> > > The dashboard is very helpful for prioritizing the de-flaking of
> > >> > > the worst offenders, and also useful for quickly drilling down and
> > >> > > grabbing failure logs from the flaky tests themselves.
> > >> > >
> > >> > >
> > >> > Would you suggest a copy/paste of your current setup (a Python
> > >> > daemon and a db instance, IIRC)?
> > >> >
> > >>
> > >> Sure, you're welcome (and encouraged) to borrow/steal it. If you make
> > >> some improvements, though, please let us know so we can merge them
> > >> back into our copy.
> > >>
> > >> The code for the server is here:
> > >>
> > >> https://github.com/apache/incubator-kudu/blob/master/build-support/test_result_server.py
> > >>
> > >> The only bit that's Kudu-specific is the 'parse_test_failure' module;
> > >> it has some heuristics to try to pull out the failure error message
> > >> from our tests, but it could easily be left out.
> > >>
> > >> -Todd
> > >> --
> > >> Todd Lipcon
> > >> Software Engineer, Cloudera
> > >>
>
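
Side note: the cron job Todd mentions could be as simple as something
like this (the script path and list address here are placeholders):

    # mail a daily flaky-test summary to the dev list at 8am
    0 8 * * * /path/to/flaky_report.sh | mail -s "Flaky test report" [email protected]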



-- 

Regards

Apekshit Sharma | Software Engineer, Cloudera | Palo Alto, California |
650-963-6311
