>> at org.apache.spark.rdd.PairRDDFunctions$$anonfun$13.apply(PairRDDFunctions.scala:1051)
>> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
>> at org.apache.spark.scheduler.Task.run(Task.scala:56)
>> at org.apache.spark.executor.Executor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/RDD-saveAsTextFile-to-local-disk-tp23725.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Getting an exception when writing an RDD to local disk with the following call:
saveAsTextFile("file:home/someuser/dir2/testupload/20150708/")
The directory (/home/someuser/dir2/testupload/) was created before running the
job, and the error message is misleading.
org.apache.spark.SparkException: Job aborted
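The stack traces in this thread are truncated, but one thing worth checking (an assumption on my part, not confirmed above) is the URI itself: `file:home/...` omits the `//` authority and the leading slash of an absolute path. Spark and Hadoop expect local-filesystem URIs of the form `file:///absolute/path` (three slashes). A quick way to see the well-formed URI for the directory from the question, sketched here in Python:

```python
from pathlib import Path

# Directory taken from the question; Path.as_uri() requires an absolute
# path but does not require the directory to actually exist.
out_dir = Path("/home/someuser/dir2/testupload/20150708")

# A well-formed local-filesystem URI has an empty authority:
# "file://" followed by "/absolute/path", i.e. three slashes in a row.
uri = out_dir.as_uri()
print(uri)  # file:///home/someuser/dir2/testupload/20150708
```

Note also that when the job runs on a cluster rather than in local mode, `saveAsTextFile("file:///...")` writes each partition to the local disk of the executor that computed it, not to the driver machine, which is a frequent source of "missing output" confusion with local paths.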