Hi everyone! I followed this guide
https://dev.to/mvillarrealb/creating-a-spark-standalone-cluster-with-docker-and-docker-compose-2021-update-6l4
to set up a Spark standalone cluster with Docker on an Ubuntu server. However,
when I submit my PySpark job to the master, the job shows up in the Spark UI,
but the worker logs the following error:

24/01/31 09:04:35 ERROR Inbox: Ignoring error
java.io.EOFException
        at java.base/java.io.DataInputStream.readFully(Unknown Source)
        at java.base/java.io.DataInputStream.readUTF(Unknown Source)
        at java.base/java.io.DataInputStream.readUTF(Unknown Source)
        at org.apache.spark.scheduler.TaskDescription$.deserializeStringLongMap(TaskDescription.scala:138)
        at org.apache.spark.scheduler.TaskDescription$.decode(TaskDescription.scala:178)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$receive$1.applyOrElse(CoarseGrainedExecutorBackend.scala:185)
        at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:115)
        at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
        at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
        at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
        at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.base/java.lang.Thread.run(Unknown Source)
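
For reference, I submit the job roughly like this (a sketch only: the container name, master URL, and script path are assumptions based on the guide's defaults, not my exact command):

```shell
# Run spark-submit inside the master container, pointing at the
# standalone master's RPC port (7077 is the guide's default).
docker exec spark-master spark-submit \
  --master spark://spark-master:7077 \
  /opt/spark-apps/my_job.py   # path to the PySpark script mounted into the container
```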


Could you please help me? What should I check?
