LuciferYang commented on PR #49357:
URL: https://github.com/apache/spark/pull/49357#issuecomment-2573044009

   > ```
   >  Query with Trigger.AvailableNow should throw error when topic partitions got unavailable during subsequent batches *** FAILED ***
   >   java.lang.AssertionError: assertion failed: Exception tree doesn't contain the expected exception with message: Some of partitions in Kafka topic(s) have been lost during running query with Trigger.AvailableNow.
   > org.scalatest.exceptions.TestFailedException: isPropagated was false Partition [topic-40, 1] metadata not propagated after timeout
   >    at org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:472)
   >    at org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:471)
   >    at org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1231)
   >    at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:1295)
   >    at org.apache.spark.sql.kafka010.KafkaTestUtils.$anonfun$waitUntilMetadataIsPropagated$1(KafkaTestUtils.scala:615)
   >    at org.scalatest.enablers.Retrying$$anon$4.makeAValiantAttempt$1(Retrying.scala:184)
   >    at org.scalatest.enablers.Retrying$$anon$4.tryTryAgain$2(Retrying.scala:196)
   >    at org.scalatest.enablers.Retrying$$anon$4.retry(Retrying.scala:226)
   >    at org.scalatest.concurrent.Eventually.eventually(Eventually.scala:348)
   >    at org.scalatest.concurrent.Eventually.eventually$(Eventually.scala:347)
   >    at org.scalatest.concurrent.Eventually$.eventually(Eventually.scala:457)
   >    at org.apache.spark.sql.kafka010.KafkaTestUtils.waitUntilMetadataIsPropagated(KafkaTestUtils.scala:614)
   >    at org.apache.spark.sql.kafka010.KafkaTestUtils.$anonfun$createTopic$1(KafkaTestUtils.scala:379)
   >    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:190)
   >    at org.apache.spark.sql.kafka010.KafkaTestUtils.createTopic(KafkaTestUtils.scala:378)
   >    at org.apache.spark.sql.kafka010.KafkaMicroBatchSourceSuiteBase.$anonfun$new$11(KafkaMicroBatchSourceSuite.scala:351)
   >    at org.apache.spark.sql.kafka010.KafkaMicroBatchSourceSuiteBase.$anonfun$new$11$adapted(KafkaMicroBatchSourceSuite.scala:348)
   >    at org.apache.spark.sql.execution.streaming.sources.ForeachBatchSink.callBatchWriter(ForeachBatchSink.scala:54)
   >    at org.apache.spark.sql.execution.streaming.sources.ForeachBatchSink.addBatch(ForeachBatchSink.scala:47)
   >    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$17(MicroBatchExecution.scala:869)
   >    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$8(SQLExecution.scala:162)
   >    at org.apache.spark.sql.execution.SQLExecution$.withSessionTagsApplied(SQLExecution.scala:268)
   >    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$7(SQLExecution.scala:124)
   >    at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:94)
   >    at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:110)
   >    at org.apache.spark.sql.artifact.ArtifactManager.withClassLoaderIfNeeded(ArtifactManager.scala:104)
   >    at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:109)
   >    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$6(SQLExecution.scala:124)
   >    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:291)
   >    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:123)
   >    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:790)
   >    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:77)
   >    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:233)
   >    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$16(MicroBatchExecution.scala:866)
   >    at org.apache.spark.sql.execution.streaming.ProgressContext.reportTimeTaken(ProgressReporter.scala:185)
   >    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:866)
   >    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$executeOneBatch$2(MicroBatchExecution.scala:387)
   >    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
   >    at org.apache.spark.sql.execution.streaming.ProgressContext.reportTimeTaken(ProgressReporter.scala:185)
   >    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.executeOneBatch(MicroBatchExecution.scala:357)
   >    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:337)
   >    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1$adapted(MicroBatchExecution.scala:337)
   >    at org.apache.spark.sql.execution.streaming.TriggerExecutor.runOneBatch(TriggerExecutor.scala:39)
   >    at org.apache.spark.sql.execution.streaming.TriggerExecutor.runOneBatch$(TriggerExecutor.scala:37)
   >    at org.apache.spark.sql.execution.streaming.MultiBatchExecutor.runOneBatch(TriggerExecutor.scala:59)
   >    at org.apache.spark.sql.execution.streaming.MultiBatchExecutor.execute(TriggerExecutor.scala:64)
   >    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:337)
   >    at org.apache.spark.sql.execution.streaming.StreamExecution.$anonfun$runStream$1(StreamExecution.scala:337)
   >    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
   >    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:790)
   >    at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:311)
   >    at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:226)
   >   at scala.Predef$.assert(Predef.scala:279)
   >   at org.apache.spark.TestUtils$.assertExceptionMsg(TestUtils.scala:200)
   >   at org.apache.spark.sql.kafka010.KafkaMicroBatchSourceSuiteBase.$anonfun$new$9(KafkaMicroBatchSourceSuite.scala:373)
   >   at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
   >   at org.scalatest.enablers.Timed$$anon$1.timeoutAfter(Timed.scala:127)
   >   at org.scalatest.concurrent.TimeLimits$.failAfterImpl(TimeLimits.scala:282)
   >   at org.scalatest.concurrent.TimeLimits.failAfter(TimeLimits.scala:231)
   >   at org.scalatest.concurrent.TimeLimits.failAfter$(TimeLimits.scala:230)
   >   at org.apache.spark.SparkFunSuite.failAfter(SparkFunSuite.scala:69)
   >   at org.apache.spark.SparkFunSuite.$anonfun$test$2(SparkFunSuite.scala:155)
   >   ...
   > ```
   > 
   > This seems to be a known flaky test.
   
   I manually re-ran this flaky test locally, and it passed.
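   For context, the inner `TestFailedException` comes from the `eventually`-based wait in `KafkaTestUtils.waitUntilMetadataIsPropagated` (see the `Eventually.eventually` frames above): ScalaTest keeps retrying the metadata check until it passes or the patience timeout elapses, so the test is sensitive to how quickly the embedded broker propagates topic metadata. Here is a minimal sketch of that retry pattern, with a hypothetical `isPropagated` condition standing in for the real broker-metadata check:
   
   ```scala
   import org.scalatest.concurrent.Eventually._
   import org.scalatest.time.SpanSugar._
   
   // Simplified sketch of the wait that times out in the log above (not the
   // actual KafkaTestUtils code). `eventually` re-runs the block until the
   // assertion passes or the patience timeout elapses; if it never passes,
   // it throws the TestFailedException seen in the failure message.
   def waitUntilMetadataIsPropagated(isPropagated: => Boolean): Unit = {
     eventually(timeout(60.seconds), interval(100.milliseconds)) {
       assert(isPropagated, "Partition metadata not propagated after timeout")
     }
   }
   ```
   
   On a loaded CI machine the broker can simply be slower than the patience window, which would be consistent with the same test passing on a quieter local machine.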
   
   
   

