have Jenkins continuously run a build target that only runs the
@AwaitsFix group, and overrides haltonfailure when calling the junit
macro (which would save us an extra Jenkins run)
Yep, doable.
decreased the likelihood of failure, but didn't completely fix the
problem -- a test might
: This is doable by enabling/disabling test groups. A new build plan
: would need to be created that would do:
:
: ant -Dtests.haltonfailure=false -Dtests.awaitsfix=true
: -Dtests.unstable=true test
Right... that's an idea that came up the other day when I was talking to
Simon at Revolution
That was really the main question I had, as someone not very familiar with
the internals of JUnit: whether it was possible for our test runner to
make the ultimate decision about the success/fail status of the entire
run based on the annotations of the tests that fail/succeed.
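For illustration only (this is not Lucene's actual runner; the annotation, class,
and method names below are all made up), the idea of letting annotations on
failing tests decide the overall verdict could be sketched in plain Java:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

public class AnnotationAwareStatus {
    // Stand-in for an @AwaitsFix-style marker annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface AwaitsFix { String bugUrl() default ""; }

    // Example "suite" with one known-bad test.
    static class DemoSuite {
        public void testStable() { /* passes */ }

        @AwaitsFix(bugUrl = "https://issues.example.org/TEST-1")
        public void testKnownBad() { throw new AssertionError("still broken"); }
    }

    /** Runs every test* method; returns true iff no *unexpected* failure. */
    static boolean runSuite(Object suite) {
        boolean ok = true;
        for (Method m : suite.getClass().getMethods()) {
            if (!m.getName().startsWith("test")) continue;
            try {
                m.invoke(suite);
            } catch (Exception e) {  // reflective call wraps the test's failure
                if (m.isAnnotationPresent(AwaitsFix.class)) {
                    // Known-bad test: report it, but don't fail the run.
                    System.out.println("IGNORED (awaits fix): " + m.getName());
                } else {
                    System.out.println("FAILED: " + m.getName());
                    ok = false;
                }
            }
        }
        return ok;
    }

    public static void main(String[] args) {
        System.out.println(runSuite(new DemoSuite()) ? "SUCCESS" : "FAILURE");
    }
}
```

The run "succeeds" here because the only failure came from a method carrying the
marker annotation; an unannotated failure would flip the overall status.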
There are two
: as a build artifact). Yet another problem is that jenkins wouldn't
: _fail_ on such pseudo-failures because the set of JUnit statuses is
: not extensible (it'd be something like FAILED+IGNORE) so we'd need to
So, I started thinking about it -- I can implement something that will
report failures (much like we do right now) but it's quite tricky to fit
it into the reporting system and continuous integration system. Here's
why -- if a test doesn't fail then its output (sysout/syserrs) are not
currently
On Sun, May 6, 2012 at 2:39 PM, Dawid Weiss
dawid.we...@cs.put.poznan.pl wrote:
Any ideas? Hoss -- how do you envision monitoring of these tests? Manually?
If the tests are run many times a day, it would be great to get a
daily report of the percent of time the tests pass. Then if it goes
from 5% to 50%, we can go uh-oh...
Yeah, well... but this is beyond the runner as it aggregates over time
-- it looks like a jenkins plugin that would analyze
On Sun, May 6, 2012 at 3:38 PM, Dawid Weiss
dawid.we...@cs.put.poznan.pl wrote:
I also admit I've never seen anything like
this -- a suite of tests with an allowed failure ratio over time and a
threshold that would trigger a warning...
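As a purely hypothetical sketch of what such a Jenkins-side aggregation might
compute (the class name, sample data, and 90% threshold are all invented for
illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class PassRateMonitor {
    /** passRates: fraction of runs that passed, one entry per day. */
    static List<String> warnings(double[] passRates, double threshold) {
        List<String> out = new ArrayList<>();
        for (int day = 0; day < passRates.length; day++) {
            if (passRates[day] < threshold) {
                // Flag any day where the unstable suite's pass rate dipped.
                out.add(String.format("day %d: pass rate %.0f%% below %.0f%%",
                        day, passRates[day] * 100, threshold * 100));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[] daily = {0.95, 0.93, 0.50};  // last day regressed badly
        for (String w : warnings(daily, 0.90)) {
            System.out.println(w);
        }
    }
}
```

The point, as noted above, is that this aggregation lives outside the runner:
something has to persist per-run results across builds before a threshold
check like this is even possible.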
Not so much an allowed failure rate... more of it fails
Not a problem. I'll be at work on Monday. Can you file an issue and assign it
to me please? I'm currently on mobile only. Dawid
On May 5, 2012 5:08 AM, Yonik Seeley yo...@lucidimagination.com wrote:
On Fri, May 4, 2012 at 6:29 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
Dawid:
With
It's a pretty useful technique, especially from a CI perspective. I use it via
JUnit's assumptions. A failed assumption is shown as an ignored test.
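A minimal sketch of that assumption behavior, using a stand-in for JUnit's
org.junit.Assume so the example stays self-contained (the names here are
illustrative, not JUnit's actual API):

```java
public class AssumeDemo {
    // Stand-in for JUnit's AssumptionViolatedException.
    static class AssumptionViolated extends RuntimeException {
        AssumptionViolated(String msg) { super(msg); }
    }

    static void assumeTrue(String msg, boolean condition) {
        if (!condition) throw new AssumptionViolated(msg);
    }

    /** Returns "PASSED", "IGNORED", or "FAILED" for one test body. */
    static String run(Runnable testBody) {
        try {
            testBody.run();
            return "PASSED";
        } catch (AssumptionViolated e) {
            return "IGNORED";   // shown as an ignored test, not a failure
        } catch (Throwable t) {
            return "FAILED";
        }
    }

    public static void main(String[] args) {
        // A test guarded by an assumption that does not hold is skipped.
        System.out.println(run(() -> assumeTrue("bug not fixed yet", false)));
    }
}
```

The key distinction is the middle catch clause: a violated assumption is
reported as ignored rather than counted against the build.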
On Sat, May 5, 2012 at 2:29 AM, Chris Hostetter hossman_luc...@fucit.org wrote:
Dawid:
With the new test runner you created, would it be possible to
One other way to do it that is already implemented is to run the full tests
without failing on failures and only touch some marker file to fail at the
end. Ant test-help gives a hint on how to run tests this way currently.
Don't know how it'd play with others - speak up.
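The marker-file idea could be sketched like this (a toy stand-in, not the
actual Ant wiring; the class name, marker path, and fake test outcomes are all
made up):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MarkerFileDemo {
    /** Fresh marker path in a temp dir (wraps IOException for brevity). */
    static Path tempMarker() {
        try {
            return Files.createTempDirectory("tests").resolve("tests-failed");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    /** Runs all "tests", touching the marker on any failure; never halts early. */
    static boolean anyFailed(Path marker, boolean[] results) {
        for (int i = 0; i < results.length; i++) {
            if (!results[i]) {
                try {
                    // Don't halt; just record that something failed.
                    Files.write(marker, ("test #" + i + " failed\n").getBytes(),
                            StandardOpenOption.CREATE, StandardOpenOption.APPEND);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            }
        }
        return Files.exists(marker);  // single verdict at the end of the run
    }

    public static void main(String[] args) {
        boolean[] results = {true, false, true};  // pretend test outcomes
        Path marker = tempMarker();
        System.out.println(anyFailed(marker, results)
                ? "BUILD FAILED: see " + marker : "BUILD SUCCESSFUL");
    }
}
```

All tests get a chance to run (useful with tests.haltonfailure=false), and the
build still ends red if the marker exists when everything is done.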
On May 5, 2012 11:25 AM,
On Fri, May 4, 2012 at 6:29 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
Dawid:
With the new test runner you created, would it be possible to setup an
annotation that we could use instead to indicate that a test should in fact
be run, and if it fails, include the failure info in the