Github user tedyu commented on the issue:

    https://github.com/apache/spark/pull/21488
  
    ```
    22:36:05.028 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 16314.0 (TID 39181, localhost, executor driver): java.io.FileNotFoundException: File file:/home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-0bbc239c-37c5-4df2-b86d-e9c7628ceb28/f1=1/f2=1/part-00000-390ac6da-50dc-4d32-ba08-462da1e8a0c2.c000.snappy.parquet does not exist
    It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:131)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:182)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:109)
    ```
    This doesn't seem to be related to the PR.
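
    For reference, the remedy the error message itself suggests looks roughly like this. A minimal sketch, assuming an existing `SparkSession` named `spark`; the table name `my_table` and the Parquet path are hypothetical placeholders, not values from the log:

    ```scala
    // Sketch only: assumes a running SparkSession `spark`.

    // Option 1: invalidate Spark's cached file listing via SQL,
    // as the error message recommends ("REFRESH TABLE tableName").
    // `my_table` is a hypothetical table name.
    spark.sql("REFRESH TABLE my_table")

    // Option 2: recreate the Dataset/DataFrame so the stale file
    // listing is dropped. The path is a placeholder.
    val df = spark.read.parquet("/path/to/table")
    ```

    Neither applies here, though, since the failure comes from a temporary directory created by the test itself, which points to a flaky test rather than a caching bug introduced by the PR.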

