This one line:
[error] Test org.apache.daffodil.example.TestScalaAPI.testScalaAPI2 failed:
expected:<0> but was:<1>, took 0.307 sec
For that test to fail an assertEquals, but only on one platform, and not
reproducibly, is very disconcerting.
The test has exactly 3 assertEquals calls that compare against 0:
@Test
def testScalaAPI2(): Unit = {
  val lw = new LogWriterForSAPITest()
  Daffodil.setLogWriter(lw)
  Daffodil.setLoggingLevel(LogLevel.Info)
  ...
  val res = dp.parse(input, outputter)
  ...
  assertEquals(0, lw.errors.size)
  assertEquals(0, lw.warnings.size)
  assertEquals(0, lw.others.size)
  // reset the global logging state
  Daffodil.setLogWriter(new ConsoleLogWriter())
  Daffodil.setLoggingLevel(LogLevel.Info)
}
So this test is failing sporadically because something is being written to the
logWriter (lw) that wasn't being written before.
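If we want the next sporadic failure to tell us what actually got logged, we
could include the captured messages in the assertion messages. A minimal
sketch, not the actual test code (I'm assuming lw.errors, lw.warnings, and
lw.others are collections of message strings, and assertNoMessages is a
hypothetical helper):

import org.junit.Assert.assertEquals

// Hypothetical helper (sketch only): same three checks, but a failure would
// print exactly which message was written to the log writer.
trait LogAssertionsSketch {
  def assertNoMessages(lw: LogWriterForSAPITest): Unit = {
    assertEquals("unexpected errors: " + lw.errors.mkString("; "), 0, lw.errors.size)
    assertEquals("unexpected warnings: " + lw.warnings.mkString("; "), 0, lw.warnings.size)
    assertEquals("unexpected others: " + lw.others.mkString("; "), 0, lw.others.size)
  }
}

Then the three bare assertEquals calls in testScalaAPI2 could become a single
assertNoMessages(lw), and the next Windows flake would show us the offending
message right in the CI log.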
________________________________
From: Interrante, John A (GE Research, US) <[email protected]>
Sent: Tuesday, April 27, 2021 2:47 PM
To: [email protected] <[email protected]>
Subject: flakey windows CI build? Or real issue?
Once you drill down into and expand the "Run Unit Tests" log, GitHub lets you
search that log via the magnifying-glass icon and search text box above the
log. Searching for "failed:" makes it easier to find the specific failures. I
found one failure and three warnings:
[error] Test org.apache.daffodil.example.TestScalaAPI.testScalaAPI2 failed:
expected:<0> but was:<1>, took 0.307 sec
[warn] Test assumption in test
org.apache.daffodil.usertests.TestSepTests.test_sep_ssp_never_1 failed:
org.junit.AssumptionViolatedException: (Implementation: daffodil) Test
'test_sep_ssp_never_1' not compatible with implementation., took 0.033 sec
[warn] Test assumption in test
org.apache.daffodil.usertests.TestSepTests.test_sep_ssp_never_3 failed:
org.junit.AssumptionViolatedException: (Implementation: daffodil) Test
'test_sep_ssp_never_3' not compatible with implementation., took 0.005 sec
[warn] Test assumption in test
org.apache.daffodil.usertests.TestSepTests.test_sep_ssp_never_4 failed:
org.junit.AssumptionViolatedException: (Implementation: daffodil) Test
'test_sep_ssp_never_4' not compatible with implementation., took 0.003 sec
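(Side note: the three [warn] lines are just JUnit assumptions being reported,
not real failures. A test that isn't applicable to the Daffodil implementation
gets skipped via an assumption, roughly like the sketch below, where the
compatibleWithDaffodil check is my own placeholder, not the actual runner
code.)

import org.junit.Assume.assumeTrue
import org.junit.Test

class AssumptionSketch {
  @Test def test_sep_ssp_never_1(): Unit = {
    // Placeholder for the real "is this test compatible with this
    // implementation?" check done by the test rig.
    val compatibleWithDaffodil = false
    // When the condition is false, JUnit throws AssumptionViolatedException
    // and sbt reports the test as a [warn] rather than an [error].
    assumeTrue("Test 'test_sep_ssp_never_1' not compatible with implementation.",
      compatibleWithDaffodil)
    // ... the rest of the test runs only when the assumption holds
  }
}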
Your previous run failed in the Windows Java 11 build's Compile step with an
HTTP 504 error when sbt was trying to fetch artifacts:
[error]
lmcoursier.internal.shaded.coursier.error.FetchError$DownloadingArtifacts:
Error fetching artifacts:
[error]
https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/com.typesafe.sbt/sbt-native-packager/scala_2.12/sbt_1.0/1.8.1/jars/sbt-native-packager.jar:
download error: Caught java.io.IOException: Server returned HTTP response
code: 504 for URL:
https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/com.typesafe.sbt/sbt-native-packager/scala_2.12/sbt_1.0/1.8.1/jars/sbt-native-packager.jar
(Server returned HTTP response code: 504 for URL:
https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/com.typesafe.sbt/sbt-native-packager/scala_2.12/sbt_1.0/1.8.1/jars/sbt-native-packager.jar)
while downloading
https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/com.typesafe.sbt/sbt-native-packager/scala_2.12/sbt_1.0/1.8.1/jars/sbt-native-packager.jar
That error probably is just a flaky network or server problem.
John
-----Original Message-----
From: Steve Lawrence <[email protected]>
Sent: Tuesday, April 27, 2021 2:17 PM
To: [email protected]
Subject: EXT: Re: flakey windows CI build? Or real issue?
I haven't seen test failures in a while; the only thing I've noticed is GitHub
Actions just stalling with no output.
In the linked PR, I see the error:
[error] Test org.apache.daffodil.example.TestScalaAPI.testScalaAPI2
failed: expected:<0> but was:<1>, took 0.307 sec
I wonder if these isAtEnd changes have introduced a race condition, or made an
existing race condition more likely to get hit?
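For illustration only (this is made-up code, not Daffodil's): the kind of race
that would explain a count of 1 on one platform and 0 everywhere else is a
check-then-act on unsynchronized shared state that emits a warning only under
an unlucky interleaving. RacyCursor, advance, and logWarning below are all
invented names.

// Made-up sketch of a racy "warn if something looks off" pattern. If two
// threads both see isAtEnd as false at the last byte and both advance, the
// position runs past the end and the consistency check logs one warning,
// i.e., exactly one extra entry in lw.warnings on an unlucky run.
class RacyCursor(data: Array[Byte]) {
  private var position = 0 // not volatile, not synchronized

  def isAtEnd: Boolean = position >= data.length

  def advance(): Unit = { if (!isAtEnd) position += 1 }

  def checkConsistency(logWarning: String => Unit): Unit =
    if (position > data.length)
      logWarning(s"position $position ran past end of data (${data.length})")
}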
On 4/27/21 2:13 PM, Beckerle, Mike wrote:
> My PR keeps failing to build on Windows. For example, this failed the Windows
> Java 8 build:
> https://github.com/mbeckerle/daffodil/actions/runs/789865909
>
> Earlier today it failed the Windows Java 11 build.
>
> The errors were different. Earlier today it was in daffodil-io; in the
> checks linked above it's in daffodil-sapi.
>
> In neither case is there an [error] identifying the specific failing test,
> only a summary at the end indicating there were failures in that module.
>
> Is any of this expected behavior? Lately I've mostly seen all 6 standard CI
> checks passing on others' PRs.
>
>
> Mike Beckerle | Principal Engineer
>
> [email protected]
>
> P +1-781-330-0412
>