Still noisy, waiting for the reference impl to untangle.
Short form:
Raw fail count by week totals, most recent week first (corresponds to bits):
Week: 0 had 136 failures
Week: 1 had 185 failures
Week: 2 had 210 failures
Week: 3 had 112 failures
Failures in Hoss' reports in ever
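The report generator itself isn't shown in these mails, so the following is only a sketch of what "corresponds to bits" appears to mean: each of the last four weekly rollups maps to one position in a per-test failure pattern, with week 0 the most recent, so a row tagged 0123 failed in all four rollups. A minimal Java illustration under that assumption (the test names and data below are invented, not Erick's actual code):

import java.util.*;

// Sketch of the "history bits" bookkeeping, assuming week 0 = most recent
// rollup. Not the actual report script; the data below is invented.
public class HistoryBits {
  public static void main(String[] args) {
    // Failing tests per weekly rollup, most recent week first.
    List<Set<String>> weeklyFailures = List.of(
        Set.of("BasicDistributedZkTest.test", "HttpPartitionTest.test"), // week 0
        Set.of("BasicDistributedZkTest.test"),                           // week 1
        Set.of("BasicDistributedZkTest.test"),                           // week 2
        Set.of("BasicDistributedZkTest.test"));                          // week 3
    Map<String, Integer> history = new TreeMap<>(); // test -> 4-bit history
    for (int week = 0; week < weeklyFailures.size(); week++) {
      for (String test : weeklyFailures.get(week)) {
        history.merge(test, 1 << week, (a, b) -> a | b);
      }
    }
    // Render the report-style pattern: digit N appears if the test failed in week N.
    history.forEach((test, bits) -> {
      StringBuilder pattern = new StringBuilder("    ");
      for (int week = 0; week < 4; week++) {
        if ((bits & (1 << week)) != 0) pattern.setCharAt(week, (char) ('0' + week));
      }
      System.out.println(pattern + "  " + test); // e.g. "0123  BasicDistributedZkTest.test"
    });
  }
}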
Unfortunately, the reference impl is creating quite a bit of noise in Hoss’
rollups. That said, I have a mail filter for test failures that puts the
reference impl tests in a different mail folder and my sense is that the
regular branch is getting an increasing number of failures.
If I have the
Still seeing quite a bit of noise due to the reference impl. That said, we do
have a reproducible error for TestRandomDVFaceting on both 8x and master, see
SOLR-14990.
Meanwhile, here’s the report for this week.
Raw fail count by week totals, most recent week first (corresponds to bits):
Week: 0
Not much change this week, still getting considerable noise from the reference
impl.
Raw fail count by week totals, most recent week first (corresponds to bits):
Week: 0 had 110 failures
Week: 1 had 150 failures
Week: 2 had 174 failures
Week: 3 had 142 failures
Failures in Hoss'
Still working through the failures on the reference impl, so AFAIK, the tests
failing large percentages of the time are on that branch.
Processing file (History bit 3): HOSS-2020-10-26.csv
Processing file (History bit 2): HOSS-2020-10-19.csv
Processing file (History bit 1): HOSS-2020-10-12.csv
Processing file (History bit 0): HOSS-2020-10-05.csv
The BadApple report remains skewed as the results include the reference impl so
this is mostly in case people are curious….
I expect next week to see an uptick in the number of tests that have failed
each of the last 4 weeks, that’ll be when the reference-impl parts of the
report kick in
Mostly for historical context for a while. It includes the reference impl, so
the stats will be skewed from now until we integrate it all.
Short form:
Raw fail count by week totals, most recent week first (corresponds to bits):
Week: 0 had 142 failures
Week: 1 had 153 failures
Week: 2 had 51 failures
same.
Uwe
-
Uwe Schindler
Achterdiek 19, D-28357 Bremen
https://www.thetaphi.de
eMail: u...@thetaphi.de
> -----Original Message-----
> From: Erick Erickson
> Sent: Monday, August 24, 2020 3:59 PM
> To: dev@lucene.apache.org
> Subject: BadApple report
>
> We have some pretty frequent failures, see:
We have some pretty frequent failures, see:
http://fucit.org/solr-jenkins-reports/failure-report.html
I’m pretty sure LBSolrClientTest has been addressed. I’m looking at what commit
caused TestConfigOverlay to start failing…
This can be a little hard to interpret since it includes tests that ha
Failures in Hoss' reports for the last 4 rollups.
There were 242 unannotated tests that failed in Hoss' rollups. Ordered by the
date I downloaded the rollup file, newest->oldest. See above for the dates the
files were collected
These tests were NOT BadApple'd or AwaitsFix'd
Failures in
…show no change to HDFS stuff. Starting June/July, failing regularly.
>
> Kevin Risden
>
>
>
> On Wed, Aug 12, 2020 at 9:03 AM Erick Erickson
> wrote:
>
>> I have the weekly rollups (with a few gaps) going back to about April
>> 2018, but nothing’s been done to try to make them generally available.
Didn’t think at first (only one cup of coffee). Here’s the Emails that test
appears in, the formatting is poor…
After that is the raw data from Hoss’ rollups that might be easier to ingest.
I have 1.3G of this kind of historical data, I’ve had vague thoughts about
putting it someplace accessible.
Risden
On Wed, Aug 12, 2020 at 9:03 AM Erick Erickson
wrote:
> I have the weekly rollups (with a few gaps) going back to about April
> 2018, but nothing’s been done to try to make them generally available. Each
> BadApple report has rates for the last 4 weeks in the attached file, just below
I have the weekly rollups (with a few gaps) going back to about April 2018, but
nothing’s been done to try to make them generally available. Each BadApple
report has rates for the last 4 weeks in the attached file, just below
"Failures over the last 4 weeks, but not every week. Ordered
Do we have any long term (aka "longitudinal") pass/fail rates for tests?
SharedFSAutoReplicaFailoverTest in particular is kinda-sorta tied to HDFS,
and that's going away to a plug-in for 9.0. The shared file system notion
isn't well supported in SolrCloud, I think.
~ David Smiley
Apache Lucene/S
Merged (thanks Mike D!).
Atri
On Tue, Aug 11, 2020 at 5:32 PM Erick Erickson wrote:
>
> Great, thanks! Let me know when you push it, I can beast the test again.
>
> > On Aug 11, 2020, at 3:48 AM, Atri Sharma wrote:
> >
> > I investigated testRequestRateLimiters and hardened the tests up:
> >
>
Great, thanks! Let me know when you push it, I can beast the test again.
> On Aug 11, 2020, at 3:48 AM, Atri Sharma wrote:
>
> I investigated testRequestRateLimiters and hardened the tests up:
>
> https://github.com/apache/lucene-solr/pull/1736
>
> This will stop testConcurrentRequests from fa
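"Beasting" a test means re-running the same suite many times to flush out seed-dependent failures. The lucene-solr build has dedicated targets for this, which aren't shown here; the sketch below is only a stand-alone illustration of the idea, not the project's tooling:

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

// Minimal stand-in for beasting: run a JUnit test class repeatedly and stop
// on the first failing iteration. Illustration only.
public class Beast {
  public static void main(String[] args) throws ClassNotFoundException {
    Class<?> suite = Class.forName(args[0]); // e.g. a flaky test class name
    int iters = args.length > 1 ? Integer.parseInt(args[1]) : 100;
    for (int i = 1; i <= iters; i++) {
      Result result = JUnitCore.runClasses(suite);
      if (!result.wasSuccessful()) {
        System.out.println("Failed on iteration " + i + ": " + result.getFailures());
        return;
      }
    }
    System.out.println(iters + " iterations, no failures");
  }
}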
I investigated testRequestRateLimiters and hardened the tests up:
https://github.com/apache/lucene-solr/pull/1736
This will stop testConcurrentRequests from failing and should
hopefully stop testSlotBorrowing as well. If testSlotBorrowing
continues to fail, I will have to rethink the test.
On Mo
OK, thanks. I’m not really annotating things at this point, although
occasionally removing some that haven’t failed in a long time.
> On Aug 10, 2020, at 1:44 PM, Tomás Fernández Löbbe
> wrote:
>
> Hi Erick,
> I've introduced and later fixed a bug in TestConfig. It hasn't failed since,
> so p
Hi Erick,
I've introduced and later fixed a bug in TestConfig. It hasn't failed
since, so please don't annotate it.
On Mon, Aug 10, 2020 at 7:47 AM Erick Erickson
wrote:
> We’re backsliding some. I encourage people to look at:
> http://fucit.org/solr-jenkins-reports/failure-report.html, we have
We’re backsliding some. I encourage people to look at:
http://fucit.org/solr-jenkins-reports/failure-report.html, we have a number of
ill-behaved tests, particularly TestRequestRateLimiter,
TestBulkSchemaConcurrent, TestConfig, SchemaApiFailureTest and
TestIndexingSequenceNumbers…
Raw fail co
There are several tests that are causing a lot of noise:
SharedFSAutoReplicaFailoverTest is failing 90%+ of the time.
TestBulkSchemaConcurrent 31%
StressHdfsTest 16%
SchemaApiFailureTest 13.88%
I encourage people to look at:
http://fucit.org/solr-jenkins-reports/failure-report.html and see if a
Short form:
Processing file (History bit 3): HOSS-2020-07-27.csv
Processing file (History bit 2): HOSS-2020-07-20.csv
Processing file (History bit 1): HOSS-2020-07-13.csv
Processing file (History bit 0): HOSS-2020-07-06.csv
Number of AwaitsFix: 33 Number of BadApples: 4
**Annotated tests that
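The ingestion step isn't shown in these mails either. As a sketch of what "Processing file (History bit N)" might do, assuming a simple testName,runs,fails CSV layout (the real rollup schema and the bookkeeping may differ):

import java.io.IOException;
import java.nio.file.*;
import java.util.*;

// Sketch of the "Processing file (History bit N)" step. The real rollup CSV
// schema isn't shown in these mails; "testName,runs,fails" is an assumption.
public class RollupIngest {
  public static void main(String[] args) throws IOException {
    // Newest rollup carries the highest history bit, per the report output.
    String[] files = {"HOSS-2020-07-27.csv", "HOSS-2020-07-20.csv",
                      "HOSS-2020-07-13.csv", "HOSS-2020-07-06.csv"};
    Map<String, Integer> failsByTest = new TreeMap<>();
    for (int i = 0; i < files.length; i++) {
      int bit = files.length - 1 - i;
      System.out.println("Processing file (History bit " + bit + "): " + files[i]);
      for (String line : Files.readAllLines(Path.of(files[i]))) {
        String[] cols = line.split(",");
        if (cols.length < 3 || !cols[2].trim().matches("\\d+")) continue; // skip header/odd rows
        failsByTest.merge(cols[0].trim(), Integer.parseInt(cols[2].trim()), Integer::sum);
      }
    }
    failsByTest.forEach((test, fails) -> System.out.println(fails + "\t" + test));
  }
}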
Well, that’s one way to reduce the number of SuppressWarnings… cut out massive
amounts of code ;)….
SuppressWarnings count: last week: 5,353, this week: 4,835, delta -518
We had quite a spike in the raw number of tests that have failed at least once
in the last week:
Raw fail count by week tot
Actually, pretty good. The attached file has a lot of noise in it that’s a
listing of the files that have more or less SuppressWarnings annotations than
last week, the delta is -19. It’s a crude measure, I can replace N
SuppressWarnings in a class with one for the entire class, but it’s also eas
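The weekly SuppressWarnings numbers read like a plain textual tally per file, diffed against the previous week's snapshot. A sketch under that assumption (not the actual script):

import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.stream.Stream;

// Hypothetical per-file @SuppressWarnings tally; diffing two runs of this a
// week apart would give the delta figures quoted above. Not the real script.
public class SuppressCount {
  public static void main(String[] args) throws IOException {
    Map<String, Long> perFile = new TreeMap<>();
    try (Stream<Path> files = Files.walk(Path.of(args[0]))) {
      files.filter(p -> p.toString().endsWith(".java")).forEach(p -> {
        try {
          // Occurrences = number of split parts minus one.
          long n = Files.readString(p).split("@SuppressWarnings", -1).length - 1;
          if (n > 0) perFile.put(p.toString(), n);
        } catch (IOException ignored) {}
      });
    }
    long total = perFile.values().stream().mapToLong(Long::longValue).sum();
    System.out.println("SuppressWarnings count: " + total);
    // Persist perFile somewhere and diff against last week's snapshot for deltas.
  }
}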
Megan:
There are a number of tests that have been flagged by some devs
that, no matter what, should _not_ be annotated with BadApple or
AwaitsFix and that’s just a list to remind me what they are.
It’s not much of a deal, though, because I’m not doing much annotating
lately. The original process
Hi Erick,
I'm wondering what is meant by "DO NOT ANNOTATE LIST" at the start of the
report? Better yet, can you please link to the scraping tool used to
generate the report?
Thank you!
Megan
On Mon, Jul 6, 2020 at 8:07 AM Erick Erickson
wrote:
> Holding fairly steady, but IDK whether Hoss’ scraping is getting data from
Holding fairly steady, but IDK whether Hoss’ scraping is getting data from
Uwe’s machines, thought I saw an e-mail go by about that.
This is the first report where the SuppressWarnings stats mean anything.
Full report attached:
DO NOT ENABLE LIST:
MoveReplicaHDFSTest.testFailedMove
Move
Holding fairly steady.
Raw fail count by week totals, most recent week first (corresponds to bits):
Week: 0 had 26 failures
Week: 1 had 26 failures
Week: 2 had 34 failures
Week: 3 had 128 failures
This week’s report includes the SuppressWarnings summary. This is really the
baseline, I ad
Not a bad week all told, but something seems a little odd, I remember a lot
more e-mails going by, but perhaps it’s just these 26 tests failing repeatedly.
Raw fail count by week totals, most recent week first (corresponds to bits):
Week: 0 had 26 failures
Week: 1 had 34 failures
Week: 2 ha
The number of chronically failing tests dropped considerably this past week,
whether that’s an anomaly or not is a good question.
I’ve finished the SuppressWarnings annotations, so next week I _should_ be able
to include how many new SuppressWarnings have been added to the code and have
it mean
Thanks for letting me know Tomás
As useful as Hoss’ rollups are, there’s always a lag to deal with; sounds like
this is one.
> On Jun 8, 2020, at 2:26 PM, Tomás Fernández Löbbe
> wrote:
>
> Thanks for keeping an eye Erick. I took a quick look at the
> "TestIndexSearcher" failures and I think
Thanks for keeping an eye Erick. I took a quick look at the
"TestIndexSearcher" failures and I think they're related to SOLR-14525.
Should be fixed after this[1] commit by Noble.
[1] https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5827ddf
On Mon, Jun 8, 2020 at 7:52 AM Erick Erickson
wrote:
If people don’t know about:
http://fucit.org/solr-jenkins-reports/suspicious-failure-report.html, I
strongly recommend you periodically check it. It reports tests that have
changed their failure rates lately. There are three currently:
"org.apache.solr.search.TestIndexSearcher","testSearcherLis
>> the nonce, ignore it. Eventually, when all the warnings are fixed or
>> suppressed, I will be advocating for _not_ introducing new warnings at least
>> on Master. To encourage this, I want un-suppressed warnings to become
>> compile-time errors.
>>
>> That’ll
encourage this, I want un-suppressed warnings to become
> compile-time errors.
>
> That’ll tempt people to just add @SuppressWarnings, and I don’t think that’s
> a proper fix, so the BadApple report will flag files that have more
> @SuppressWarnings than they did last week and I’l
at least on Master. To
encourage this, I want un-suppressed warnings to become compile-time errors.
That’ll tempt people to just add @SuppressWarnings, and I don’t think that’s a
proper fix, so the BadApple report will flag files that have more
@SuppressWarnings than they did last week and I’ll
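For context: javac can promote lint warnings to compile-time errors with -Xlint plus -Werror, and the scope of @SuppressWarnings is what makes blanket use risky. A class- or method-level annotation hides every matching warning in its scope, while a declaration-level one covers a single statement. A small illustration (legacyApi is hypothetical, not project code):

import java.util.List;

// Illustration only: blanket vs. targeted suppression scope.
// legacyApi() is a hypothetical raw-returning method, not project code.
public class WarningScope {

  @SuppressWarnings("unchecked") // method-wide: hides every unchecked warning below
  static List<String> blanket() {
    return (List<String>) legacyApi();
  }

  static List<String> targeted() {
    @SuppressWarnings("unchecked") // narrow: covers only this one declaration
    List<String> names = (List<String>) legacyApi();
    return names;
  }

  @SuppressWarnings("rawtypes")
  static List legacyApi() { return List.of("a"); }
}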
> Hoss’s rollups are here:
> http://fucit.org/solr-jenkins-reports/failure-report.html which show the
> rates, but not where they came from.
If I click on a particular test entry on "failure-report.html", I'm
presented with a dialog with links for each failure. Clicking that link
takes me to a fi
Thanks that helps. I'll try to have a look at some of the failures related
to areas I know.
Ilan
On Mon, May 25, 2020 at 7:07 PM Erick Erickson
wrote:
> Ilan:
>
> That’s, unfortunately, not an easy question. Hoss’s rollups are here:
> http://fucit.org/solr-jenkins-reports/failure-report.html which show the rates, but not where they came from.
Ilan:
That’s, unfortunately, not an easy question. Hoss’s rollups are here:
http://fucit.org/solr-jenkins-reports/failure-report.html which show the rates,
but not where they came from.
Here’s an example of a failure from Jenkins, if you follow the link you can see
the full output, (click “co
Where are the test failure details?
On Mon, May 25, 2020 at 4:47 PM Erick Erickson
wrote:
> Here’s the summary:
>
> Raw fail count by week totals, most recent week first (corresponds to
> bits):
> Week: 0 had 113 failures
> Week: 1 had 103 failures
> Week: 2 had 102 failures
> Week: 3 had 343 failures
Here’s the summary:
Raw fail count by week totals, most recent week first (corresponds to bits):
Week: 0 had 113 failures
Week: 1 had 103 failures
Week: 2 had 102 failures
Week: 3 had 343 failures
Failures in Hoss' reports for the last 4 rollups.
There were 511 unannotated tests
Short form:
Raw fail count by week totals, most recent week first (corresponds to bits):
Week: 0 had 103 failures
Week: 1 had 102 failures
Week: 2 had 343 failures
Week: 3 had 86 failures
Failures in Hoss' reports for the last 4 rollups.
There were 493 unannotated tests that fai
Largely ignore the fact that weeks 0 and 1 had so many failures; that was due
to Jenkins running out of space, which bled over into the week 0 report.
This is the first one that reports the number of SuppressWarnings annotations
that we can use as a baseline. If I start adding SuppressWarnings th
Phew! Thanks for digging Erick, and for producing these BadApple reports.
Mike McCandless
http://blog.mikemccandless.com
On Wed, May 6, 2020 at 7:59 AM Erick Erickson
wrote:
> OK, this morning things are back to normal. I think the disk space issue
> was to blame because checking after Mike’s fix didn’t look like it cured the problem.
OK, this morning things are back to normal. I think the disk space issue
was to blame because checking after Mike’s fix didn’t look like it
cured the problem.
Thanks all!
> On May 5, 2020, at 1:41 PM, Chris Hostetter wrote:
>
>
> : And FWIW, I beasted one of the failing suites last night _with
OK, thanks Chris.
The 24 hour rollup still shows many failures in the several classes, I’ll check
tomorrow
to see if that’s a consequence of the disk full problem.
> On May 5, 2020, at 1:41 PM, Chris Hostetter wrote:
>
>
> : And FWIW, I beasted one of the failing suites last night _without_
: And FWIW, I beasted one of the failing suites last night _without_
: Mike’s changes and didn’t get any failures so I can’t say anything about
: whether Mike’s changes helped or not.
IIUC McCandless's failure only affects you if you use the "jenkins" test
data file (the really big wikipedia d
Uwe
>
> -
> Uwe Schindler
> Achterdiek 19, D-28357 Bremen
> https://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>> -----Original Message-----
>> From: Erick Erickson
>> Sent: Monday, May 4, 2020 1:54 PM
>> To: dev@lucene.apache.org
>> Subject: PLEASE READ! BadApple report. Last week was horrible!
https://www.thetaphi.de
eMail: u...@thetaphi.de
> -----Original Message-----
> From: Erick Erickson
> Sent: Monday, May 4, 2020 1:54 PM
> To: dev@lucene.apache.org
> Subject: PLEASE READ! BadApple report. Last week was horrible!
>
> I don’t know whether we had some temporary glit
Mike:
I saw the push. Hoss’ rollups go for “the last 24 hours”, so it’ll be Tuesday
evening before things have had a chance to work their way through, I’ll look
tomorrow.
Meanwhile I’m beasting one of the failing test suites (without the change) and
280 iterations so far and no failures. That
Hi Erick,
OK I pushed a fix! See if it decreases the failure rate for those newly
bad apples?
Sorry and thanks :)
Mike McCandless
http://blog.mikemccandless.com
On Mon, May 4, 2020 at 1:06 PM Erick Erickson
wrote:
> Mike:
>
> I have no idea. Hoss’ rollups don’t link back to builds, they
>
Mike:
I have no idea. Hoss’ rollups don’t link back to builds, they
just aggregate the results.
Not a huge deal if it’s something like this of course. Let’s just
say I’ve had my share of “moments” ;).
And unfortunately, the test failures are pretty rare on a
percentage basis, so it’s hard to te
Hi Erick,
It's possible this was the root cause of many of the failures:
https://issues.apache.org/jira/browse/LUCENE-9191
Do these transient failures look something like this?
[junit4]   > Throwable #1: java.nio.charset.MalformedInputException: Input length = 1
[junit4]   >   at …
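That exception is what a strict charset decoder throws when it hits a truncated or non-UTF-8 byte sequence, consistent with a mangled test data file. A minimal reproduction of the same failure mode, purely as illustration (not the LUCENE-9191 fix itself):

import java.nio.ByteBuffer;
import java.nio.charset.*;

// Minimal reproduction: a strict UTF-8 decoder throws
// MalformedInputException ("Input length = 1") on a truncated
// multi-byte sequence. Illustration only.
public class MalformedDemo {
  public static void main(String[] args) throws CharacterCodingException {
    CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
        .onMalformedInput(CodingErrorAction.REPORT)
        .onUnmappableCharacter(CodingErrorAction.REPORT);
    // 0xC3 opens a two-byte UTF-8 sequence; with no second byte the
    // decoder reports a malformed input of length 1.
    decoder.decode(ByteBuffer.wrap(new byte[] {(byte) 0xC3}));
  }
}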
I don’t know whether we had some temporary glitch that broke lots of tests and
they’ve been fixed or we had a major regression, but this needs to be addressed
ASAP if they’re still failing. See everything below the line "ALL OF THE TESTS
BELOW HERE HAVE ONLY FAILED IN THE LAST WEEK!” in this e-m
Kevin: The good news is that no SyncSliceTest failures in the last week, cool!
Number of AwaitsFix: 42 Number of BadApples: 4
Raw fail count by week totals, most recent week first (corresponds to bits):
Week: 0 had 86 failures
Week: 1 had 78 failures
Week: 2 had 117 failures
Week: 3 had 99 failures
Raw fail count by week totals, most recent week first (corresponds to bits):
Week: 0 had 78 failures
Week: 1 had 117 failures
Week: 2 had 99 failures
Week: 3 had 69 failures
Failures in Hoss' reports for the last 4 rollups.
There were 243 unannotated tests that failed in Hoss'
>
> 0123 59.4 195 92 HdfsSyncSliceTest.test
I'm looking into this HdfsSyncSliceTest failure. Jira
https://issues.apache.org/jira/browse/SOLR-13886
Kevin Risden
On Mon, Apr 13, 2020 at 8:35 AM Erick Erickson
wrote:
> We’re backsliding a bit. Note that over the las
We’re backsliding a bit. Note that over the last two weeks we’ve had
successively more failures, HdfsSyncSliceTest is failing over half the time!
Can we just nuke it?
Here’s the short form
Raw fail count by week totals, most recent week first (corresponds to bits):
Week: 0 had 117 failures
Week: 1 had 99 failures
Short form:
We had a slight uptick in failures last week, root cause unknown.
Raw fail count by week totals, most recent week first (corresponds to bits):
Week: 0 had 99 failures
Week: 1 had 69 failures
Week: 2 had 65 failures
Week: 3 had 129 failures
Failures in Hoss' reports f
There are a couple of tests that can have BadApple removed,
MultiThreadedOCPTest.test
SolrZkClientTest.testSimpleUpdateACLs
I’ll take care of those today or tomorrow.
Raw fail count by week totals, most recent week first (corresponds to bits):
Week: 0 had 69 failures
Week: 1 had 65 fai
Short form:
There were 287 unannotated tests that failed in Hoss' rollups. Ordered by the
date I downloaded the rollup file, newest->oldest. See above for the dates the
files were collected
These tests were NOT BadApple'd or AwaitsFix'd
Failures in the last 4 reports..
Report   Pct   runs   fails   test
I was on vacation the last couple of weeks so missed the BadApple reports.
Full results attached
Failures in Hoss' reports for the last 4 rollups.
There were 373 unannotated tests that failed in Hoss' rollups. Ordered by the
date I downloaded the rollup file, newest->oldest. See above f
Attached.
Short form:
**Haven't failed in the last 4 rollups.
**Methods: 2
MultiThreadedOCPTest.test
SolrZkClientTest.testSimpleUpdateACLs
Failures in Hoss' reports for the last 4 rollups.
There were 292 unannotated tests that failed in Hoss' rollups. Ordered
by the date I d
Holding reasonably steady in terms of failures every week for the last 4:
Failures in the last 4 reports..
Report   Pct   runs   fails   test
0123 2.4 1694 49 BasicDistributedZkTest.test
0123 0.2 1645 5 ExecutePlanActionTest.testTaskTimeout
Won’t add annotations. Here are the failures in the last 4 runs:
Raw fail count by week totals, most recent week first (corresponds to bits):
Week: 0 had 114 failures
Week: 1 had 125 failures
Week: 2 had 191 failures
Week: 3 had 118 failures
Failures in the last 4 reports..
Report   Pct   runs   fails   test
Failures in each of the last 4 reports..
Report   Pct   runs   fails   test
0123 0.3 1384 11 AutoScalingHandlerTest.testReadApi
0123 0.3 1402 8 HttpPartitionTest.test
0123 0.3 1393 11 HttpPartitionWithTlogReplicasTest.test
I’m not actively annotating anything at this point, the number of failed tests
over each of the last 4 weeks is short enough that I’ll just echo those in
these e-mails, the full report is attached for anyone who wants to track
history. I’ll revise the wording to not make it look like I’ll annota
Will do. Actually, won’t do (disable that is)…. One of the things that’s kind
of a pain is that the report doesn’t distinguish between different JVMs so
there’s no really convenient way to ignore this kind of thing.
Anyway, I’ve put both of them in my list, and I have to say I’m not actively
an
Same goes for TestPackedInts. Currently, failures in test runs using the ZGC or
Shenandoah garbage collectors reflect collector bugs, not the test itself. Please
don't disable them.
On Mon, Jan 6, 2020 at 12:38 PM Robert Muir wrote:
> We shouldn't disable Test2BPostings since there is nothing wrong with the
> test: this
We shouldn't disable Test2BPostings since there is nothing wrong with the
test: this is one impacted by bugs in the Shenandoah and ZGC garbage
collectors. See the other threads on the dev-list about them.
On Mon, Jan 6, 2020 at 10:47 AM Erick Erickson
wrote:
> Short form:
>
> There were 1480 una
Short form:
There were 1480 unannotated tests that failed in Hoss' rollups. Ordered by the
date I downloaded the rollup file, newest->oldest. See above for the dates the
files were collected
These tests were NOT BadApple'd or AwaitsFix'd
All tests that failed 4 weeks running will be BadApple'd
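For reference, the annotations in question come from Lucene's test framework: @AwaitsFix for known-broken tests and @BadApple for intermittently failing ones, with the tests.awaitsfix / tests.badapples system properties controlling whether annotated tests run. Marking a chronically failing test looks roughly like this (the class and bug URL are placeholders):

import org.apache.lucene.util.LuceneTestCase;

public class SomeFlakyTest extends LuceneTestCase {

  // Intermittent failure: skipped or run depending on -Dtests.badapples.
  @BadApple(bugUrl = "https://issues.apache.org/jira/browse/SOLR-NNNNN") // placeholder issue
  public void testSometimesFlaky() throws Exception {
    // ...
  }

  // Known-broken: skipped until the linked issue is fixed.
  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/SOLR-NNNNN") // placeholder issue
  public void testKnownBroken() throws Exception {
    // ...
  }
}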
As all the security stuff settles down, I’m still taking these snapshots but
mostly to keep a complete record. The longer records, i.e., those covering the
last 7 days, contain a lot of noise comparatively.
That said, it’s worth looking at Hoss’ last 7 day rollup, we do have a number
of tests failing quite
Short form:
Raw fail count by week totals, most recent week first (corresponds to bits):
Week: 0 had 83 failures
Week: 1 had 253 failures
Week: 2 had 56 failures
Week: 3 had 66 failures
Failures in the last 4 reports..
Report   Pct   runs   fails   test
0123 16.7
This is not a good week at all:
Raw fail count by week totals, most recent week first (corresponds to bits):
Week: 0 had 253 failures (most recent 7 days)
Week: 1 had 56 failures (7 days before that)
Week: 2 had 66 failures
Week: 3 had 83 failures
Going from 56 failures to 253 is A Very Ba
MoveReplicaHDFSTest.test
LegacyCloudClusterPropTest.testCreateCollectionSwitchLegacyCloud
TestModelManagerPersistence
all fail more than 10%, MoveReplicaHDFSTest 50%.
BasicAuthIntegrationTest.testBasicAuth comes in at just under 10%.
Short form:
There were 147 unannotated tests that failed in Ho
It’s been a while. I think this is mostly informational. I was all excited when
the reports were getting so much better, but that was an artifact of some
test environments not being up and running.
When Mark’s test work hits, we’ll probably have to start over.
That said, people SHOULD LOOK H
I’m going to suspend these until we build up a better backlog of tests since a
number of machines weren’t being collected by Hoss’ rollups. I’ll continue to
gather the rollups every week, but for a while I don’t think it’s worth
cluttering your inbox.
I’ll probably just continue to gather Hoss’ rollups each week, but until we get
the jenkins stuff back running it’s probably not worth the effort.
No annotation changes will happen this week.
Summary:
Processing file (History bit 3): HOSS-2019-088-05.csv
Processing file (History bit 2): HOSS-2019-08-19.csv
Processing file (History bit 1): HOSS-2019-08-12.csv
Processing file (History bit 0): HOSS-2019-07-29.csv
Number of AwaitsFix: 38 Numbe
Continued improvement I think. Or at least the improvements 3 weeks ago are
working their way through the system. Note that the number of tests that _only_
failed three weeks ago is almost half the total. So I have some optimism that
next week we’ll see a further large drop.
Here’s the synopsi
Interestingly, the number of failed tests has gone down pretty radically over
the last while. I skipped about 4 weeks of collecting the reports while moving,
but if I compare the tests that failed during the last two weeks in the rollup
from July 1 with the last two weeks collected today, th
Here it is after a hiatus. I have moved from California to South Orange, NJ…
it’s a long story why. But I’ll be glad to tell y’all about driving a Chevy
Bolt EV across country and how Wyoming has very few commercial charging
options… But I did get to see Old Faithful erupt…
Anyway, I won’t make an
HdfsAutoAddReplicasIntegrationTest.testSimple
I am going to awaitsfix this test -
https://issues.apache.org/jira/browse/SOLR-13338. I haven't had time to
look into recent failures. I thought the Jetty upgrade would have helped.
It had very similar timeout waiting exception.
Kevin Risden
On Mon,
Pretty steady, I won’t be doing anything with annotations this week:
**Annotations will be removed from the following tests because they haven't
failed in the last 4 rollups.
**Methods: 3
FullSolrCloudDistribCmdsTest.test
MultiThreadedOCPTest.test
SolrZkClientTest.testSimpleUpdateACLs
I won’t change annotations again this week. Here’s the short form:
**Annotations will be removed from the following tests because they haven't
failed in the last 4 rollups.
**Methods: 2
FullSolrCloudDistribCmdsTest.test
SolrZkClientTest.testSimpleUpdateACLs
Failures in Hoss'
Holding pretty steady, won’t remove annotations just yet. Full report attached.
I _strongly_ urge people to take a quick glance at:
http://fucit.org/solr-jenkins-reports/failure-report.html regularly. There are
5 tests that are failing 25% of the time or more currently.
——Report
**Annotation
I probably won’t remove the annotations indicated this week, kinda busy.
Overall looks like we’re getting gradually better.
Full report attached:
**Annotations will be removed from the following tests because they haven't
failed in the last 4 rollups.
**Methods: 3
FullSolrCloudDistribCmd
things have settled down quite a bit. So going forward I’ll publish this each
week, but will only periodically change the annotations.
If/when we stop running 7x Jenkins jobs, I may start annotating with BadApple
again, we’ll see.
Meanwhile I’ll post the list of new test failures over the last 4 weeks
Well, I didn't add stuff last week; it slipped through the cracks.
Anyway, here's the current list. NOTE: lots more tests are being
un-annotated than annotated, which is good.
Also, this last report has 421 total tests that failed sometime in the
last 4 weeks. The report before had 655. Still quite
Well, I missed two weeks in a row. So sue me ;). This week fer sure
Here's the condensed report. Let me know if there are any issues. Full
report attached.
DO NOT ENABLE LIST:
'TestControlledRealTimeReopenThread.testCRTReopen'
'TestICUNormalizer2CharFilter.testRandomStrings'
'Test
This is a pretty bad week. 60+ tests to be annotated and only 4 to be
un-annotated. Here's the culled list, full report attached.
**Annotations will be removed from the following tests because they
haven't failed in the last 4 rollups.
**Methods: 4
MoveReplicaHDFSTest.testNormalFailedMove
Hi Erick,
On Mon, Sep 10, 2018 at 8:06 PM, Erick Erickson wrote:
> First, I have these two lists, are they still current?
>
> DO NOT ENABLE LIST:
> 'TestControlledRealTimeReopenThread.testCRTReopen'
> 'TestICUNormalizer2CharFilter.testRandomStrings'
> 'TestICUTokenizerCJK'
>
+1 to
First, I have these two lists, are they still current?
DO NOT ENABLE LIST:
'TestControlledRealTimeReopenThread.testCRTReopen'
'TestICUNormalizer2CharFilter.testRandomStrings'
'TestICUTokenizerCJK'
'TestImpersonationWithHadoopAuth.testForwarding'
'TestLTRReRankingPipeline.testDi
Sure, won't BadApple TestWithCollection.
On Mon, Aug 27, 2018 at 10:01 PM Shalin Shekhar Mangar
wrote:
>
> Thanks Erick. I'm working on fixing TestWithCollection so please do not
> BadApple it this week.
>
> On Tue, Aug 28, 2018 at 1:04 AM Erick Erickson
> wrote:
>>
>> On the plus side, the CDCR tests (except BiDir) seem to be fixed.
Thanks Erick. I'm working on fixing TestWithCollection so please do not
BadApple it this week.
On Tue, Aug 28, 2018 at 1:04 AM Erick Erickson
wrote:
> On the plus side, the CDCR tests (except BiDir) seem to be fixed.
>
> Also on the plus side, there are quite a number of tests that have
> _not_
On the plus side, the CDCR tests (except BiDir) seem to be fixed.
Also on the plus side, there are quite a number of tests that have
_not_ failed in the last 4 weeks and I'll un-annotate.
On the minus side, TestPolicy has 39 tests that have failed at least
once in the last 4 weeks. I'll beast thi
**Annotated tests/suites that didn't fail in the last 4 weeks.
**Annotations will be removed from the following tests because they
haven't failed in the last 4 rollups.
**Methods: 8
BasicAuthIntegrationTest.testBasicAuth
CollectionsAPIAsyncDistributedZkTest.testAsyncRequests
MoveRepl
I still think it’s a mistake to try and use all the Jenkins results to
drive ignoring tests. It needs to be an objective measure in a good env.
We also should not be ignoring tests en masse without individual
consideration. Critical test coverage should be treated differently than
any random test
Alexandre:
Feel free! What I'm struggling with is not that someone checked in
some code that all of a sudden started breaking things. Rather, that a
test that's been working perfectly will fail once, then won't
reproducibly fail again, and does _not_ appear to be related to recent
code changes.
In fac