[ https://issues.apache.org/jira/browse/SPARK-34563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Kamprath updated SPARK-34563:
-------------------------------------
    Description: 
I have some PySpark code that periodically checkpoints a data frame that I am building in pieces by unioning those pieces together as they are constructed. (Py)Spark fails on the second checkpoint, which would be a union of a new piece of the desired data frame with a previously checkpointed piece. Simplified PySpark code that triggers the problem:

 
{code:python}
RANGE_STEP = 10000
PARTITIONS = 5
COUNT_UNIONS = 20

# Assumes an active SparkSession `spark` with a checkpoint directory already
# set via spark.sparkContext.setCheckpointDir() (see the setup sketch below).
df = spark.range(1, RANGE_STEP + 1, numPartitions=PARTITIONS)

for i in range(1, COUNT_UNIONS + 1):
    print('Processing i = {0}'.format(i))
    # Build the next RANGE_STEP rows and union them onto the running frame,
    # checkpointing the combined result; the failure hits the second
    # checkpoint (i = 2).
    new_df = spark.range(RANGE_STEP * i + 1, RANGE_STEP * (i + 1) + 1,
                         numPartitions=PARTITIONS)
    df = df.union(new_df).checkpoint()

df.count()
{code}
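
The snippet assumes an active SparkSession named `spark` with a checkpoint directory already configured. A minimal setup sketch (the master URL and checkpoint path are placeholders, not my actual cluster values):

{code:python}
from pyspark.sql import SparkSession

# Placeholder master URL and checkpoint path; on a real cluster the
# checkpoint directory should be a DFS path visible to all executors.
spark = (SparkSession.builder
         .master('spark://master:7077')
         .appName('checkpoint-union-repro')
         .getOrCreate())
spark.sparkContext.setCheckpointDir('/tmp/spark-checkpoints')
{code}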
When this code reaches the checkpoint on the second loop iteration (i=2), the job fails with the following error:

 
{code:java}
Py4JJavaError: An error occurred while calling o119.checkpoint.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 9 in stage 10.0 failed 4 times, most recent failure: Lost task 9.3 in stage 10.0 (TID 264, 10.20.30.13, executor 0): com.esotericsoftware.kryo.KryoException: Encountered unregistered class ID: 9062
        at com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:137)
        at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:693)
        at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:804)
        at org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:296)
        at org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:168)
        at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
        at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
        at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
        at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
        at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
        at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2154)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:127)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:462)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:465)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2059)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2008)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2007)
        at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
        at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2007)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:973)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:973)
        at scala.Option.foreach(Option.scala:407)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:973)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2239)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2188)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2177)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:775)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2114)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2135)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2154)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2179)
        at org.apache.spark.rdd.RDD.count(RDD.scala:1227)
        at org.apache.spark.sql.Dataset.$anonfun$checkpoint$1(Dataset.scala:696)
        at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3618)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3616)
        at org.apache.spark.sql.Dataset.checkpoint(Dataset.scala:687)
        at org.apache.spark.sql.Dataset.checkpoint(Dataset.scala:650)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Thread.java:748)
Caused by: com.esotericsoftware.kryo.KryoException: Encountered unregistered class ID: 9062
        at com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:137)
        at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:693)
        at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:804)
        at org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:296)
        at org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:168)
        at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
        at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
        at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
        at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
        at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
        at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2154)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:127)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:462)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:465)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        ... 1 more
{code}
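
The failing frames read the checkpointed partitions back through Spark's configured serializer, which per the trace is Kryo, i.e. the standard spark.serializer setting. For reference, a minimal sketch of enabling it at the session level (illustrative; the setting may equally live in spark-defaults.conf):

{code:python}
# Equivalent session-level configuration for enabling Kryo serialization;
# the same key is commonly set cluster-wide in spark-defaults.conf.
spark = (SparkSession.builder
         .config('spark.serializer',
                 'org.apache.spark.serializer.KryoSerializer')
         .getOrCreate())
{code}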
 

Note that the checkpoint directory is set, as the first checkpoint does succeed. Also, if the checkpoint() call is removed, the sample code succeeds as expected, so the problem is isolated to the use of checkpoint; the control variant is sketched below.
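
For reference, the control variant with the checkpoint() call removed, which runs to completion:

{code:python}
# Same loop without checkpoint(); this variant succeeds, and df.count()
# returns RANGE_STEP * (COUNT_UNIONS + 1) = 210000.
df = spark.range(1, RANGE_STEP + 1, numPartitions=PARTITIONS)
for i in range(1, COUNT_UNIONS + 1):
    new_df = spark.range(RANGE_STEP * i + 1, RANGE_STEP * (i + 1) + 1,
                         numPartitions=PARTITIONS)
    df = df.union(new_df)
df.count()
{code}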

 


> Checkpointing a union with another checkpoint fails
> ---------------------------------------------------
>
>                 Key: SPARK-34563
>                 URL: https://issues.apache.org/jira/browse/SPARK-34563
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 3.0.2
>         Environment: I am running Spark 3.0.2 in standalone cluster mode, built for Hadoop 2.7 and Scala 2.12.12. I am using QFS 2.2.2 (Quantcast File System) as the underlying DFS. The nodes run on Debian Stretch, and Java is openjdk version "1.8.0_275".
>            Reporter: Michael Kamprath
>            Priority: Major
>


