Getting an exception when writing an RDD to local disk using the following call:
saveAsTextFile("file:home/someuser/dir2/testupload/20150708/")
The directory (/home/someuser/dir2/testupload/) was created before running the
job, and the error message is misleading:
org.apache.spark.SparkException: Job aborted
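One thing worth checking (an observation about the URI shown above, not something stated in the thread): "file:home/..." is an opaque URI with no authority component, whereas Hadoop-style paths usually take the hierarchical form "file:///home/...". A minimal sketch with java.net.URI shows the difference; the path is the one quoted above:

```java
import java.net.URI;

public class FileUriCheck {
    public static void main(String[] args) throws Exception {
        // "file:home/..." is an opaque URI: no "//" authority, so no resolvable path.
        URI opaque = new URI("file:home/someuser/dir2/testupload/20150708/");
        // "file:///home/..." is hierarchical with an absolute path component.
        URI hierarchical = new URI("file:///home/someuser/dir2/testupload/20150708/");
        System.out.println(opaque.getPath());       // null
        System.out.println(hierarchical.getPath()); // /home/someuser/dir2/testupload/20150708/
    }
}
```

If the opaque form was passed to saveAsTextFile, switching to the "file:///" form is a reasonable first thing to try.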
Best
Ayan
On Mon, May 11, 2015 at 5:03 PM, Akhil Das ak...@sigmoidanalytics.com
wrote:
Did you try repartitioning? You might end up spending a lot of time on GC,
though.
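The repartition suggestion could be sketched as follows. This is a hypothetical fragment, not from the thread: the RDD variable, the input path, and the partition count of 16 are all assumptions, and it needs a live SparkContext to run.

```java
// Hypothetical sketch: repartition before writing so the output is spread
// across a chosen number of files (and tasks). Requires a JavaSparkContext
// named sparkContext; "rdd" and the partition count 16 are assumptions.
JavaRDD<String> rdd = sparkContext.textFile("file:///home/someuser/input");

// repartition() triggers a full shuffle into 16 partitions; this is where
// the GC pressure Akhil mentions can come from on large datasets.
JavaRDD<String> repartitioned = rdd.repartition(16);

repartitioned.saveAsTextFile("file:///home/someuser/dir2/testupload/20150708/");
```

For reducing the partition count without a full shuffle, coalesce() is the cheaper alternative to repartition().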
Thanks
Best Regards
On Fri, May 8, 2015 at 11:59 PM, Vijay Pawnarkar
vijaypawnar...@gmail.com wrote:
I am using the Spark Cassandra connector to work with a table with 3
million records, using the .where() API to work with only certain rows in
this table. The where clause filters the data down to 1 row.
CassandraJavaUtil.javaFunctions(sparkContext)
    .cassandraTable(KEY_SPACE, MY_TABLE,
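The snippet above is cut off, so the rest of the call is not shown in the thread. For context, a complete call of this shape with the connector's Java API typically looks like the sketch below; MyRow, the column name, and the bound value are hypothetical, and it assumes the spark-cassandra-connector Java driver classes are on the classpath:

```java
// Hypothetical completion of the truncated call above; MyRow, "partition_key",
// and someKey are illustrative names, not taken from the thread.
JavaRDD<MyRow> rows = CassandraJavaUtil.javaFunctions(sparkContext)
    .cassandraTable(KEY_SPACE, MY_TABLE, CassandraJavaUtil.mapRowTo(MyRow.class))
    // where() pushes the CQL predicate down to Cassandra, so only the
    // matching rows are read, rather than scanning all 3 million records.
    .where("partition_key = ?", someKey);
```

The pushdown only applies to predicates Cassandra itself can serve (partition key or indexed columns); other filters still happen Spark-side.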