Hi, when I run the same word count program from the command line and from the IDE, I get two different sets of log messages. The command-line execution even reports that the build failed, although the program itself succeeds: it produces the "Word Count" directory and the resulting output file. I have pasted the two outputs below in case anybody knows what is going on.
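For context, here is roughly what the program looks like. This is a hedged reconstruction, not the actual source: it is inferred only from the call sites visible in the IDE log (textFile at SparkExampleRDD.java:41, mapToPair at :48, saveAsTextFile at :51), so the file name, input path, and splitting logic are assumptions:

```java
// Hypothetical sketch of SparkExampleRDD, reconstructed from the log's call sites.
// Requires spark-core on the classpath; paths and tokenization are assumed.
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class SparkExampleRDD {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("Word Count").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // textFile (log shows SparkExampleRDD.java:41)
        JavaRDD<String> lines = sc.textFile("input.txt");

        JavaPairRDD<String, Integer> counts = lines
                .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                // mapToPair (log shows SparkExampleRDD.java:48)
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey(Integer::sum);

        // saveAsTextFile (log shows SparkExampleRDD.java:51) writes the "Word Count" directory
        counts.saveAsTextFile("Word Count");

        // Stopping the context explicitly, rather than relying on the shutdown hook,
        // avoids the thread-cleanup races visible in the command-line output below.
        sc.stop();
    }
}
```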
*IDE OUTPUT*

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/home/zahid/.m2/repository/org/apache/spark/spark-unsafe_2.12/2.4.5/spark-unsafe_2.12-2.4.5.jar) to method java.nio.Bits.unaligned()
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/03/01 13:09:47 INFO SparkContext: Running Spark version 2.4.5
20/03/01 13:09:47 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/03/01 13:09:47 INFO SparkContext: Submitted application: Word Count
20/03/01 13:09:47 INFO SecurityManager: Changing view acls to: zahid
20/03/01 13:09:47 INFO SecurityManager: Changing modify acls to: zahid
20/03/01 13:09:47 INFO SecurityManager: Changing view acls groups to:
20/03/01 13:09:47 INFO SecurityManager: Changing modify acls groups to:
20/03/01 13:09:47 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(zahid); groups with view permissions: Set(); users with modify permissions: Set(zahid); groups with modify permissions: Set()
20/03/01 13:09:47 INFO Utils: Successfully started service 'sparkDriver' on port 44285.
20/03/01 13:09:47 INFO SparkEnv: Registering MapOutputTracker
20/03/01 13:09:47 INFO SparkEnv: Registering BlockManagerMaster
20/03/01 13:09:47 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/03/01 13:09:47 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/03/01 13:09:47 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-fa6e3f69-4a94-45d0-bf6f-d25d3fff4a9b
20/03/01 13:09:47 INFO MemoryStore: MemoryStore started with capacity 987.6 MB
20/03/01 13:09:47 INFO SparkEnv: Registering OutputCommitCoordinator
20/03/01 13:09:47 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/03/01 13:09:47 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.0.42:4040
20/03/01 13:09:47 INFO Executor: Starting executor ID driver on host localhost
20/03/01 13:09:47 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 41071.
20/03/01 13:09:47 INFO NettyBlockTransferService: Server created on 192.168.0.42:41071
20/03/01 13:09:47 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/03/01 13:09:47 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.0.42, 41071, None)
20/03/01 13:09:47 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.0.42:41071 with 987.6 MB RAM, BlockManagerId(driver, 192.168.0.42, 41071, None)
20/03/01 13:09:47 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.0.42, 41071, None)
20/03/01 13:09:47 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.0.42, 41071, None)
20/03/01 13:09:48 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 107.2 KB, free 987.5 MB)
20/03/01 13:09:48 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 20.4 KB, free 987.5 MB)
20/03/01 13:09:48 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.0.42:41071 (size: 20.4 KB, free: 987.6 MB)
20/03/01 13:09:48 INFO SparkContext: Created broadcast 0 from textFile at SparkExampleRDD.java:41
20/03/01 13:09:48 INFO FileInputFormat: Total input paths to process : 1
20/03/01 13:09:48 INFO deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
20/03/01 13:09:48 INFO HadoopMapRedCommitProtocol: Using output committer class org.apache.hadoop.mapred.FileOutputCommitter
20/03/01 13:09:48 INFO SparkContext: Starting job: runJob at SparkHadoopWriter.scala:78
20/03/01 13:09:48 INFO DAGScheduler: Registering RDD 3 (mapToPair at SparkExampleRDD.java:48) as input to shuffle 0
20/03/01 13:09:48 INFO DAGScheduler: Got job 0 (runJob at SparkHadoopWriter.scala:78) with 1 output partitions
20/03/01 13:09:48 INFO DAGScheduler: Final stage: ResultStage 1 (runJob at SparkHadoopWriter.scala:78)
20/03/01 13:09:48 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
20/03/01 13:09:48 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
20/03/01 13:09:48 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at mapToPair at SparkExampleRDD.java:48), which has no missing parents
20/03/01 13:09:48 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 6.8 KB, free 987.5 MB)
20/03/01 13:09:48 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 3.6 KB, free 987.5 MB)
20/03/01 13:09:48 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.0.42:41071 (size: 3.6 KB, free: 987.6 MB)
20/03/01 13:09:48 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1163
20/03/01 13:09:48 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at mapToPair at SparkExampleRDD.java:48) (first 15 tasks are for partitions Vector(0))
20/03/01 13:09:48 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
20/03/01 13:09:48 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 7384 bytes)
20/03/01 13:09:48 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
20/03/01 13:09:48 INFO HadoopRDD: Input split: file:/home/zahid/NetBeansProjects/apachesparks/input.txt:0+485
20/03/01 13:09:48 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1116 bytes result sent to driver
20/03/01 13:09:48 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 121 ms on localhost (executor driver) (1/1)
20/03/01 13:09:48 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
20/03/01 13:09:48 INFO DAGScheduler: ShuffleMapStage 0 (mapToPair at SparkExampleRDD.java:48) finished in 0.187 s
20/03/01 13:09:48 INFO DAGScheduler: looking for newly runnable stages
20/03/01 13:09:48 INFO DAGScheduler: running: Set()
20/03/01 13:09:48 INFO DAGScheduler: waiting: Set(ResultStage 1)
20/03/01 13:09:48 INFO DAGScheduler: failed: Set()
20/03/01 13:09:48 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[5] at saveAsTextFile at SparkExampleRDD.java:51), which has no missing parents
20/03/01 13:09:48 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 67.1 KB, free 987.4 MB)
20/03/01 13:09:48 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 24.5 KB, free 987.4 MB)
20/03/01 13:09:48 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 192.168.0.42:41071 (size: 24.5 KB, free: 987.6 MB)
20/03/01 13:09:48 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1163
20/03/01 13:09:48 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[5] at saveAsTextFile at SparkExampleRDD.java:51) (first 15 tasks are for partitions Vector(0))
20/03/01 13:09:48 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
20/03/01 13:09:48 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, executor driver, partition 0, ANY, 7141 bytes)
20/03/01 13:09:48 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
20/03/01 13:09:48 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks including 1 local blocks and 0 remote blocks
20/03/01 13:09:48 INFO BlockManagerInfo: Removed broadcast_1_piece0 on 192.168.0.42:41071 in memory (size: 3.6 KB, free: 987.6 MB)
20/03/01 13:09:48 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 5 ms
20/03/01 13:09:48 INFO HadoopMapRedCommitProtocol: Using output committer class org.apache.hadoop.mapred.FileOutputCommitter
20/03/01 13:09:48 INFO FileOutputCommitter: Saved output of task 'attempt_20200301130948_0005_m_000000_0' to file:/home/zahid/NetBeansProjects/apachesparks/Word Count/_temporary/0/task_20200301130948_0005_m_000000
20/03/01 13:09:48 INFO SparkHadoopMapRedUtil: attempt_20200301130948_0005_m_000000_0: Committed
20/03/01 13:09:48 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 1508 bytes result sent to driver
20/03/01 13:09:48 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 92 ms on localhost (executor driver) (1/1)
20/03/01 13:09:48 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
20/03/01 13:09:48 INFO DAGScheduler: ResultStage 1 (runJob at SparkHadoopWriter.scala:78) finished in 0.107 s
20/03/01 13:09:48 INFO DAGScheduler: Job 0 finished: runJob at SparkHadoopWriter.scala:78, took 0.357721 s
20/03/01 13:09:48 INFO SparkHadoopWriter: Job job_20200301130948_0005 committed.
20/03/01 13:09:48 INFO SparkContext: Invoking stop() from shutdown hook
20/03/01 13:09:48 INFO SparkUI: Stopped Spark web UI at http://192.168.0.42:4040
20/03/01 13:09:48 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/03/01 13:09:48 INFO MemoryStore: MemoryStore cleared
20/03/01 13:09:48 INFO BlockManager: BlockManager stopped
20/03/01 13:09:48 INFO BlockManagerMaster: BlockManagerMaster stopped
20/03/01 13:09:48 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/03/01 13:09:48 INFO SparkContext: Successfully stopped SparkContext
20/03/01 13:09:48 INFO ShutdownHookManager: Shutdown hook called
20/03/01 13:09:48 INFO ShutdownHookManager: Deleting directory /tmp/spark-c6a02c06-1b85-4304-b273-6c3785726d06
------------------------------------------------------------------------
BUILD SUCCESS
------------------------------------------------------------------------
Total time: 3.286 s
Finished at: 2020-03-01T13:09:48+00:00
Final Memory: 15M/60M

*COMMAND LINE OUTPUT*

zahid@192:~/NetBeansProjects/SparkSimpleApp> mvn exec:java -Dexec.mainClass=co.uk.backbutton.sparksimpleapp.SimpleApp
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------< co.uk.backbutton:SparkSimpleApp >-------------------
[INFO] Building Simple Project 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- exec-maven-plugin:1.6.0:java (default-cli) @ SparkSimpleApp ---
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/home/zahid/.m2/repository/org/apache/spark/spark-unsafe_2.11/2.4.4/spark-unsafe_2.11-2.4.4.jar) to method java.nio.Bits.unaligned()
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/03/01 13:06:57 INFO SparkContext: Running Spark version 2.4.4
20/03/01 13:06:57 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/03/01 13:06:57 INFO SparkContext: Submitted application: Simple Application
20/03/01 13:06:57 INFO SecurityManager: Changing view acls to: zahid
20/03/01 13:06:57 INFO SecurityManager: Changing modify acls to: zahid
20/03/01 13:06:57 INFO SecurityManager: Changing view acls groups to:
20/03/01 13:06:57 INFO SecurityManager: Changing modify acls groups to:
20/03/01 13:06:57 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(zahid); groups with view permissions: Set(); users with modify permissions: Set(zahid); groups with modify permissions: Set()
20/03/01 13:06:57 INFO Utils: Successfully started service 'sparkDriver' on port 46539.
20/03/01 13:06:57 INFO SparkEnv: Registering MapOutputTracker
20/03/01 13:06:57 INFO SparkEnv: Registering BlockManagerMaster
20/03/01 13:06:57 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/03/01 13:06:57 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/03/01 13:06:57 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-67828a96-8e35-4f62-bc47-411759a66fca
20/03/01 13:06:57 INFO MemoryStore: MemoryStore started with capacity 987.6 MB
20/03/01 13:06:57 INFO SparkEnv: Registering OutputCommitCoordinator
20/03/01 13:06:57 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/03/01 13:06:57 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.0.42:4040
20/03/01 13:06:57 INFO Executor: Starting executor ID driver on host localhost
20/03/01 13:06:57 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 44927.
20/03/01 13:06:57 INFO NettyBlockTransferService: Server created on 192.168.0.42:44927
20/03/01 13:06:57 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/03/01 13:06:57 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.0.42, 44927, None)
20/03/01 13:06:57 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.0.42:44927 with 987.6 MB RAM, BlockManagerId(driver, 192.168.0.42, 44927, None)
20/03/01 13:06:57 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.0.42, 44927, None)
20/03/01 13:06:57 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.0.42, 44927, None)
20/03/01 13:06:58 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 107.2 KB, free 987.5 MB)
20/03/01 13:06:58 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 20.4 KB, free 987.5 MB)
20/03/01 13:06:58 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.0.42:44927 (size: 20.4 KB, free: 987.6 MB)
20/03/01 13:06:58 INFO SparkContext: Created broadcast 0 from textFile at SimpleApp.java:31
20/03/01 13:06:58 INFO FileInputFormat: Total input paths to process : 1
20/03/01 13:06:58 INFO SparkContext: Starting job: count at SimpleApp.java:38
20/03/01 13:06:58 INFO DAGScheduler: Got job 0 (count at SimpleApp.java:38) with 1 output partitions
20/03/01 13:06:58 INFO DAGScheduler: Final stage: ResultStage 0 (count at SimpleApp.java:38)
20/03/01 13:06:58 INFO DAGScheduler: Parents of final stage: List()
20/03/01 13:06:58 INFO DAGScheduler: Missing parents: List()
20/03/01 13:06:58 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at filter at SimpleApp.java:38), which has no missing parents
20/03/01 13:06:58 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.2 KB, free 987.5 MB)
20/03/01 13:06:58 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.5 KB, free 987.5 MB)
20/03/01 13:06:58 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.0.42:44927 (size: 2.5 KB, free: 987.6 MB)
20/03/01 13:06:58 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1161
20/03/01 13:06:58 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at filter at SimpleApp.java:38) (first 15 tasks are for partitions Vector(0))
20/03/01 13:06:58 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
20/03/01 13:06:58 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 7927 bytes)
20/03/01 13:06:58 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
20/03/01 13:06:58 INFO HadoopRDD: Input split: file:/home/zahid/spark/spark-3.0.0-preview2-bin-hadoop2.7/README.md:0+4666
20/03/01 13:06:58 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 875 bytes result sent to driver
20/03/01 13:06:58 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 69 ms on localhost (executor driver) (1/1)
20/03/01 13:06:58 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
20/03/01 13:06:58 INFO DAGScheduler: ResultStage 0 (count at SimpleApp.java:38) finished in 0.125 s
20/03/01 13:06:58 INFO DAGScheduler: Job 0 finished: count at SimpleApp.java:38, took 0.149096 s
20/03/01 13:06:58 INFO SparkContext: Starting job: count at SimpleApp.java:39
20/03/01 13:06:58 INFO DAGScheduler: Got job 1 (count at SimpleApp.java:39) with 1 output partitions
20/03/01 13:06:58 INFO DAGScheduler: Final stage: ResultStage 1 (count at SimpleApp.java:39)
20/03/01 13:06:58 INFO DAGScheduler: Parents of final stage: List()
20/03/01 13:06:58 INFO DAGScheduler: Missing parents: List()
20/03/01 13:06:58 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[3] at filter at SimpleApp.java:39), which has no missing parents
20/03/01 13:06:58 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 4.2 KB, free 987.5 MB)
20/03/01 13:06:58 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 2.5 KB, free 987.5 MB)
20/03/01 13:06:58 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 192.168.0.42:44927 (size: 2.5 KB, free: 987.6 MB)
20/03/01 13:06:58 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1161
20/03/01 13:06:58 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[3] at filter at SimpleApp.java:39) (first 15 tasks are for partitions Vector(0))
20/03/01 13:06:58 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
20/03/01 13:06:58 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, executor driver, partition 0, PROCESS_LOCAL, 7927 bytes)
20/03/01 13:06:58 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
20/03/01 13:06:58 INFO HadoopRDD: Input split: file:/home/zahid/spark/spark-3.0.0-preview2-bin-hadoop2.7/README.md:0+4666
20/03/01 13:06:58 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 832 bytes result sent to driver
20/03/01 13:06:58 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 9 ms on localhost (executor driver) (1/1)
20/03/01 13:06:58 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
20/03/01 13:06:58 INFO DAGScheduler: ResultStage 1 (count at SimpleApp.java:39) finished in 0.015 s
20/03/01 13:06:58 INFO DAGScheduler: Job 1 finished: count at SimpleApp.java:39, took 0.017328 s
Lines with a: 65, lines with b: 33
20/03/01 13:06:58 ERROR ContextCleaner: Error in cleaning thread
java.lang.InterruptedException
    at java.base/java.lang.Object.wait(Native Method)
    at java.base/java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:155)
    at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:181)
    at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1302)
    at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:178)
    at org.apache.spark.ContextCleaner$$anon$1.run(ContextCleaner.scala:73)
20/03/01 13:06:58 ERROR Utils: uncaught error in thread spark-listener-group-appStatus, stopping SparkContext
java.lang.InterruptedException
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056)
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2090)
    at java.base/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
    at org.apache.spark.scheduler.AsyncEventQueue$$anonfun$org$apache$spark$scheduler$AsyncEventQueue$$dispatch$1.apply$mcJ$sp(AsyncEventQueue.scala:97)
    at org.apache.spark.scheduler.AsyncEventQueue$$anonfun$org$apache$spark$scheduler$AsyncEventQueue$$dispatch$1.apply(AsyncEventQueue.scala:87)
    at org.apache.spark.scheduler.AsyncEventQueue$$anonfun$org$apache$spark$scheduler$AsyncEventQueue$$dispatch$1.apply(AsyncEventQueue.scala:87)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:87)
    at org.apache.spark.scheduler.AsyncEventQueue$$anon$1$$anonfun$run$1.apply$mcV$sp(AsyncEventQueue.scala:83)
    at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1302)
    at org.apache.spark.scheduler.AsyncEventQueue$$anon$1.run(AsyncEventQueue.scala:82)
20/03/01 13:06:58 WARN FileSystem: exception in the cleaner thread but it will continue to run
java.lang.InterruptedException
    at java.base/java.lang.Object.wait(Native Method)
    at java.base/java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:155)
    at java.base/java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:176)
    at org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:2989)
    at java.base/java.lang.Thread.run(Thread.java:830)
20/03/01 13:06:58 ERROR Utils: uncaught error in thread spark-listener-group-executorManagement, stopping SparkContext
java.lang.InterruptedException
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056)
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2090)
    at java.base/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
    at org.apache.spark.scheduler.AsyncEventQueue$$anonfun$org$apache$spark$scheduler$AsyncEventQueue$$dispatch$1.apply$mcJ$sp(AsyncEventQueue.scala:97)
    at org.apache.spark.scheduler.AsyncEventQueue$$anonfun$org$apache$spark$scheduler$AsyncEventQueue$$dispatch$1.apply(AsyncEventQueue.scala:87)
    at org.apache.spark.scheduler.AsyncEventQueue$$anonfun$org$apache$spark$scheduler$AsyncEventQueue$$dispatch$1.apply(AsyncEventQueue.scala:87)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:87)
    at org.apache.spark.scheduler.AsyncEventQueue$$anon$1$$anonfun$run$1.apply$mcV$sp(AsyncEventQueue.scala:83)
    at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1302)
    at org.apache.spark.scheduler.AsyncEventQueue$$anon$1.run(AsyncEventQueue.scala:82)
20/03/01 13:06:58 ERROR Utils: throw uncaught fatal error in thread spark-listener-group-appStatus
java.lang.InterruptedException
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056)
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2090)
    at java.base/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
    at org.apache.spark.scheduler.AsyncEventQueue$$anonfun$org$apache$spark$scheduler$AsyncEventQueue$$dispatch$1.apply$mcJ$sp(AsyncEventQueue.scala:97)
    at org.apache.spark.scheduler.AsyncEventQueue$$anonfun$org$apache$spark$scheduler$AsyncEventQueue$$dispatch$1.apply(AsyncEventQueue.scala:87)
    at org.apache.spark.scheduler.AsyncEventQueue$$anonfun$org$apache$spark$scheduler$AsyncEventQueue$$dispatch$1.apply(AsyncEventQueue.scala:87)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:87)
    at org.apache.spark.scheduler.AsyncEventQueue$$anon$1$$anonfun$run$1.apply$mcV$sp(AsyncEventQueue.scala:83)
    at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1302)
    at org.apache.spark.scheduler.AsyncEventQueue$$anon$1.run(AsyncEventQueue.scala:82)
[WARNING]
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait (AbstractQueuedSynchronizer.java:2056)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await (AbstractQueuedSynchronizer.java:2090)
    at java.util.concurrent.LinkedBlockingQueue.take (LinkedBlockingQueue.java:433)
    at org.apache.spark.scheduler.AsyncEventQueue$$anonfun$org$apache$spark$scheduler$AsyncEventQueue$$dispatch$1.apply$mcJ$sp (AsyncEventQueue.scala:97)
    at org.apache.spark.scheduler.AsyncEventQueue$$anonfun$org$apache$spark$scheduler$AsyncEventQueue$$dispatch$1.apply (AsyncEventQueue.scala:87)
    at org.apache.spark.scheduler.AsyncEventQueue$$anonfun$org$apache$spark$scheduler$AsyncEventQueue$$dispatch$1.apply (AsyncEventQueue.scala:87)
    at scala.util.DynamicVariable.withValue (DynamicVariable.scala:58)
    at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch (AsyncEventQueue.scala:87)
    at org.apache.spark.scheduler.AsyncEventQueue$$anon$1$$anonfun$run$1.apply$mcV$sp (AsyncEventQueue.scala:83)
    at org.apache.spark.util.Utils$.tryOrStopSparkContext (Utils.scala:1302)
    at org.apache.spark.scheduler.AsyncEventQueue$$anon$1.run (AsyncEventQueue.scala:82)
20/03/01 13:06:58 ERROR Utils: throw uncaught fatal error in thread spark-listener-group-executorManagement
java.lang.InterruptedException
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056)
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2090)
    at java.base/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
    at org.apache.spark.scheduler.AsyncEventQueue$$anonfun$org$apache$spark$scheduler$AsyncEventQueue$$dispatch$1.apply$mcJ$sp(AsyncEventQueue.scala:97)
    at org.apache.spark.scheduler.AsyncEventQueue$$anonfun$org$apache$spark$scheduler$AsyncEventQueue$$dispatch$1.apply(AsyncEventQueue.scala:87)
    at org.apache.spark.scheduler.AsyncEventQueue$$anonfun$org$apache$spark$scheduler$AsyncEventQueue$$dispatch$1.apply(AsyncEventQueue.scala:87)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:87)
    at org.apache.spark.scheduler.AsyncEventQueue$$anon$1$$anonfun$run$1.apply$mcV$sp(AsyncEventQueue.scala:83)
    at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1302)
    at org.apache.spark.scheduler.AsyncEventQueue$$anon$1.run(AsyncEventQueue.scala:82)
[WARNING] java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait (AbstractQueuedSynchronizer.java:2056)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await (AbstractQueuedSynchronizer.java:2090)
    at java.util.concurrent.LinkedBlockingQueue.take (LinkedBlockingQueue.java:433)
    at org.apache.spark.scheduler.AsyncEventQueue$$anonfun$org$apache$spark$scheduler$AsyncEventQueue$$dispatch$1.apply$mcJ$sp (AsyncEventQueue.scala:97)
    at org.apache.spark.scheduler.AsyncEventQueue$$anonfun$org$apache$spark$scheduler$AsyncEventQueue$$dispatch$1.apply (AsyncEventQueue.scala:87)
    at org.apache.spark.scheduler.AsyncEventQueue$$anonfun$org$apache$spark$scheduler$AsyncEventQueue$$dispatch$1.apply (AsyncEventQueue.scala:87)
    at scala.util.DynamicVariable.withValue (DynamicVariable.scala:58)
    at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch (AsyncEventQueue.scala:87)
    at org.apache.spark.scheduler.AsyncEventQueue$$anon$1$$anonfun$run$1.apply$mcV$sp (AsyncEventQueue.scala:83)
    at org.apache.spark.util.Utils$.tryOrStopSparkContext (Utils.scala:1302)
    at org.apache.spark.scheduler.AsyncEventQueue$$anon$1.run (AsyncEventQueue.scala:82)
20/03/01 13:06:58 INFO SparkContext: SparkContext already stopped.
20/03/01 13:06:58 INFO SparkUI: Stopped Spark web UI at http://192.168.0.42:4040
20/03/01 13:07:11 WARN Executor: Issue communicating with driver in heartbeater
org.apache.spark.SparkException: Exception thrown in awaitResult:
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:92)
    at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$reportHeartBeat(Executor.scala:841)
    at org.apache.spark.executor.Executor$$anon$2$$anonfun$run$1.apply$mcV$sp(Executor.scala:870)
    at org.apache.spark.executor.Executor$$anon$2$$anonfun$run$1.apply(Executor.scala:870)
    at org.apache.spark.executor.Executor$$anon$2$$anonfun$run$1.apply(Executor.scala:870)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1945)
    at org.apache.spark.executor.Executor$$anon$2.run(Executor.scala:870)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:830)
Caused by: org.apache.spark.SparkException: Could not find HeartbeatReceiver.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:160)
    at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:135)
    at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:229)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:523)
    at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:91)
    ... 12 more
[WARNING] thread Thread[dispatcher-event-loop-0,5,co.uk.backbutton.sparksimpleapp.SimpleApp] was interrupted but is still alive after waiting at least 14999msecs
[WARNING] thread Thread[dispatcher-event-loop-0,5,co.uk.backbutton.sparksimpleapp.SimpleApp] will linger despite being asked to die via interruption
[WARNING] thread Thread[dispatcher-event-loop-1,5,co.uk.backbutton.sparksimpleapp.SimpleApp] will linger despite being asked to die via interruption
[WARNING] thread Thread[rpc-server-3-1,5,co.uk.backbutton.sparksimpleapp.SimpleApp] will linger despite being asked to die via interruption
[WARNING] thread Thread[RemoteBlock-temp-file-clean-thread,5,co.uk.backbutton.sparksimpleapp.SimpleApp] will linger despite being asked to die via interruption
[WARNING] thread Thread[heartbeat-receiver-event-loop-thread,5,co.uk.backbutton.sparksimpleapp.SimpleApp] will linger despite being asked to die via interruption
[WARNING] thread Thread[netty-rpc-env-timeout,5,co.uk.backbutton.sparksimpleapp.SimpleApp] will linger despite being asked to die via interruption
[WARNING] thread Thread[driver-heartbeater,5,co.uk.backbutton.sparksimpleapp.SimpleApp] will linger despite being asked to die via interruption
[WARNING] thread Thread[shuffle-server-5-1,5,co.uk.backbutton.sparksimpleapp.SimpleApp] will linger despite being asked to die via interruption
[WARNING] thread Thread[element-tracking-store-worker,5,co.uk.backbutton.sparksimpleapp.SimpleApp] will linger despite being asked to die via interruption
[WARNING] thread Thread[org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner,5,co.uk.backbutton.sparksimpleapp.SimpleApp] will linger despite being asked to die via interruption
[WARNING] thread Thread[Executor task launch worker for task 1,5,co.uk.backbutton.sparksimpleapp.SimpleApp] will linger despite being asked to die via interruption
[WARNING] thread Thread[stop-spark-context,5,co.uk.backbutton.sparksimpleapp.SimpleApp] will linger despite being asked to die via interruption
[WARNING] NOTE: 12 thread(s) did not finish despite being asked to via interruption. This is not a problem with exec:java, it is a problem with the running code. Although not serious, it should be remedied.
20/03/01 13:07:13 ERROR MapOutputTrackerMaster: Error communicating with MapOutputTracker
java.lang.InterruptedException
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1081)
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1369)
    at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:212)
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:222)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227)
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:220)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:92)
    at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:76)
    at org.apache.spark.MapOutputTracker.askTracker(MapOutputTracker.scala:266)
    at org.apache.spark.MapOutputTracker.sendTracker(MapOutputTracker.scala:276)
    at org.apache.spark.MapOutputTrackerMaster.stop(MapOutputTracker.scala:665)
    at org.apache.spark.SparkEnv.stop(SparkEnv.scala:87)
    at org.apache.spark.SparkContext$$anonfun$stop$11.apply$mcV$sp(SparkContext.scala:1974)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1340)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:1973)
    at org.apache.spark.SparkContext$$anon$3.run(SparkContext.scala:1903)
20/03/01 13:07:13 ERROR Utils: Uncaught exception in thread stop-spark-context
org.apache.spark.SparkException: Error communicating with MapOutputTracker
    at org.apache.spark.MapOutputTracker.askTracker(MapOutputTracker.scala:270)
    at org.apache.spark.MapOutputTracker.sendTracker(MapOutputTracker.scala:276)
    at org.apache.spark.MapOutputTrackerMaster.stop(MapOutputTracker.scala:665)
    at org.apache.spark.SparkEnv.stop(SparkEnv.scala:87)
    at org.apache.spark.SparkContext$$anonfun$stop$11.apply$mcV$sp(SparkContext.scala:1974)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1340)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:1973)
    at org.apache.spark.SparkContext$$anon$3.run(SparkContext.scala:1903)
Caused by: java.lang.InterruptedException
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1081)
    at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1369)
    at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:212)
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:222)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227)
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:220)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:92)
    at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:76)
    at org.apache.spark.MapOutputTracker.askTracker(MapOutputTracker.scala:266)
    ... 7 more
20/03/01 13:07:13 INFO SparkContext: Successfully stopped SparkContext
[WARNING] Couldn't destroy threadgroup org.codehaus.mojo.exec.ExecJavaMojo$IsolatedThreadGroup[name=co.uk.backbutton.sparksimpleapp.SimpleApp,maxpri=10]
java.lang.IllegalThreadStateException
    at java.lang.ThreadGroup.destroy (ThreadGroup.java:775)
    at org.codehaus.mojo.exec.ExecJavaMojo.execute (ExecJavaMojo.java:321)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
    at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
    at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
    at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957)
    at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289)
    at org.apache.maven.cli.MavenCli.main (MavenCli.java:193)
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke (Method.java:567)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
    at
org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406) at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347) [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 17.318 s [INFO] Finished at: 2020-03-01T13:07:13Z [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.6.0:java (default-cli) on project SparkSimpleApp: An exception occured while executing the Java class. null: InterruptedException -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException 20/03/01 13:07:13 INFO DiskBlockManager: Shutdown hook called 20/03/01 13:07:13 INFO ShutdownHookManager: Shutdown hook called 20/03/01 13:07:13 INFO ShutdownHookManager: Deleting directory /tmp/spark-b0db2e4c-b5d8-472c-8319-2fc36e827d0e 20/03/01 13:07:13 INFO ShutdownHookManager: Deleting directory /tmp/spark-b0db2e4c-b5d8-472c-8319-2fc36e827d0e/userFiles-66ec57ab-91b7-4746-8709-3cde436fe1a3 Backbutton.co.uk ¯\_(ツ)_/¯ ♡۶Java♡۶RMI ♡۶ Make Use Method {MUM} makeuse.org <http://www.backbutton.co.uk>
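Reading the trace, the program itself completes (the output directory is written and SparkContext stops cleanly); the BUILD FAILURE comes from exec-maven-plugin, which runs the class in-process and then tries to interrupt and destroy Spark's still-alive threads, turning the resulting InterruptedException into a build failure. The IDE launches a plain JVM, so it never runs that cleanup step, which would explain the different output. One commonly suggested workaround (a sketch only, not verified against this project's pom.xml) is to tell the plugin's java goal not to attempt the thread cleanup, using its documented cleanupDaemonThreads parameter:

```
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>1.6.0</version>
  <configuration>
    <mainClass>co.uk.backbutton.sparksimpleapp.SimpleApp</mainClass>
    <!-- Skip interrupting lingering (Spark) threads after main() returns,
         so the InterruptedException above never surfaces as a build failure. -->
    <cleanupDaemonThreads>false</cleanupDaemonThreads>
  </configuration>
</plugin>
```

Calling SparkContext.stop() (or SparkSession.stop()) at the end of main() should also reduce the number of lingering threads the plugin complains about.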