[ https://issues.apache.org/jira/browse/SPARK-14229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15216759#comment-15216759 ]

Russell Jurney commented on SPARK-14229:
----------------------------------------

To reproduce for MongoDB:

The data I am having trouble storing is at 
s3://agile_data_science/on_time_performance.parquet, and its ACL is set to 
public, so you should be able to load it. If you can't reproduce, let me know 
and I will try to set up S3 access from my machine (I'm working in local mode).

My init code:

{code:title=init.py}
import json
import pymongo
import pymongo_spark

# Makes saveToMongoDB() available on RDDs
pymongo_spark.activate()
{code}

To reproduce a successful load of this data, reading it with textFile into an 
RDD without it ever being a DataFrame (note that this generates 20GB of 
temporary files, so be sure you have the space!):

{code:title=success.py}
# Load the gzipped JSON Lines file and parse each line into a plain dict
on_time_lines = sc.textFile("s3://agile_data_science/On_Time_On_Time_Performance_2015.jsonl.gz")
on_time_performance = on_time_lines.map(lambda x: json.loads(x))

# Plain dicts save to MongoDB without issue
on_time_performance.saveToMongoDB('mongodb://localhost:27017/agile_data_science.on_time_performance')
{code}

To reproduce a failed load of this data when saving the RDD underlying a 
DataFrame (or any RDD descended from it):

{code:title=failure.py}
# Load the same data as a DataFrame instead
on_time_dataframe = sqlContext.read.parquet('s3://agile_data_science/on_time_performance.parquet')
on_time_dataframe = on_time_dataframe.drop("")  # remove empty field

# Saving the DataFrame's RDD of Rows fails with a PickleException
on_time_dataframe.rdd.saveToMongoDB('mongodb://localhost:27017/agile_data_science.on_time_performance')
{code}
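
As an aside, a possible workaround sketch (untested): since the pickle failure 
appears to be about the Row class itself, converting each Row back to a plain 
dict with Row.asDict() before saving may sidestep it:

{code:title=workaround_sketch.py}
# Hypothetical workaround: strip the pyspark.sql Row wrapper so the rows
# pickle as plain dicts, like the textFile/json.loads path that succeeds.
as_dicts = on_time_dataframe.rdd.map(lambda row: row.asDict())
as_dicts.saveToMongoDB('mongodb://localhost:27017/agile_data_science.on_time_performance')
{code}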

And with the longer syntax:

{code:title=long_failure.py}
config = {"mongo.output.uri": "mongodb://localhost:27017/agile_data_science.on_time_performance"}
on_time_dataframe.rdd.saveAsNewAPIHadoopFile(
  path='file://unused', 
  outputFormatClass='com.mongodb.hadoop.MongoOutputFormat',
  keyClass='org.apache.hadoop.io.Text', 
  valueClass='org.apache.hadoop.io.MapWritable', 
  conf=config
)
{code}
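
The same conversion might apply to the long form as well; a sketch, assuming 
saveAsNewAPIHadoopFile wants a pair RDD (the None key is a guess at what 
MongoOutputFormat will accept):

{code:title=long_workaround_sketch.py}
# Hypothetical: pair each plain-dict row with a null key so the
# Python-to-pair-RDD conversion sees 2-tuples instead of bare Rows.
config = {"mongo.output.uri": "mongodb://localhost:27017/agile_data_science.on_time_performance"}
pairs = on_time_dataframe.rdd.map(lambda row: (None, row.asDict()))
pairs.saveAsNewAPIHadoopFile(
  path='file://unused',
  outputFormatClass='com.mongodb.hadoop.MongoOutputFormat',
  keyClass='org.apache.hadoop.io.Text',
  valueClass='org.apache.hadoop.io.MapWritable',
  conf=config
)
{code}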

The documents look like:

{code}
Row(Year=2015, Quarter=1, Month=1, DayofMonth=1, DayOfWeek=4, 
FlightDate=u'2015-01-01', UniqueCarrier=u'AA', AirlineID=19805, Carrier=u'AA', 
TailNum=u'N787AA', FlightNum=1, OriginAirportID=12478, 
OriginAirportSeqID=1247802, OriginCityMarketID=31703, Origin=u'JFK', 
OriginCityName=u'New York, NY', OriginState=u'NY', OriginStateFips=36, 
OriginStateName=u'New York', OriginWac=22, DestAirportID=12892, 
DestAirportSeqID=1289203, DestCityMarketID=32575, Dest=u'LAX', 
DestCityName=u'Los Angeles, CA', DestState=u'CA', DestStateFips=6, 
DestStateName=u'California', DestWac=91, CRSDepTime=900, DepTime=855, 
DepDelay=-5.0, DepDelayMinutes=0.0, DepDel15=0.0, DepartureDelayGroups=-1, 
DepTimeBlk=u'0900-0959', TaxiOut=17.0, WheelsOff=912, WheelsOn=1230, 
TaxiIn=7.0, CRSArrTime=1230, ArrTime=1237, ArrDelay=7.0, ArrDelayMinutes=7.0, 
ArrDel15=0.0, ArrivalDelayGroups=0, ArrTimeBlk=u'1200-1259', Cancelled=0.0, 
CancellationCode=u'', Diverted=0.0, CRSElapsedTime=390.0, 
ActualElapsedTime=402.0, AirTime=378.0, Flights=1.0, Distance=2475.0, 
DistanceGroup=10, CarrierDelay=None, WeatherDelay=None, NASDelay=None, 
SecurityDelay=None, LateAircraftDelay=None, FirstDepTime=None, 
TotalAddGTime=None, LongestAddGTime=None, DivAirportLandings=0, 
DivReachedDest=None, DivActualElapsedTime=None, DivArrDelay=None, 
DivDistance=None, Div1Airport=u'', Div1AirportID=None, Div1AirportSeqID=None, 
Div1WheelsOn=None, Div1TotalGTime=None, Div1LongestGTime=None, 
Div1WheelsOff=None, Div1TailNum=u'', Div2Airport=u'', Div2AirportID=None, 
Div2AirportSeqID=None, Div2WheelsOn=None, Div2TotalGTime=None, 
Div2LongestGTime=None, Div2WheelsOff=None, Div2TailNum=u'', Div3Airport=u'', 
Div3AirportID=None, Div3AirportSeqID=None, Div3WheelsOn=None, 
Div3TotalGTime=None, Div3LongestGTime=None, Div3WheelsOff=u'', Div3TailNum=u'', 
Div4Airport=u'', Div4AirportID=u'', Div4AirportSeqID=u'', Div4WheelsOn=u'', 
Div4TotalGTime=u'', Div4LongestGTime=u'', Div4WheelsOff=u'', Div4TailNum=u'', 
Div5Airport=u'', Div5AirportID=u'', Div5AirportSeqID=u'', Div5WheelsOn=u'', 
Div5TotalGTime=u'', Div5LongestGTime=u'', Div5WheelsOff=u'', Div5TailNum=u'')
{code}

As JSON:

{code:title=data.json}
{"Year":2015,"Quarter":1,"Month":1,"DayofMonth":10,"DayOfWeek":6,"FlightDate":"2015-01-10","UniqueCarrier":"AA","AirlineID":19805,"Carrier":"AA","TailNum":"N790AA","FlightNum":1,"OriginAirportID":12478,"OriginAirportSeqID":1247802,"OriginCityMarketID":31703,"Origin":"JFK","OriginCityName":"New
 York, NY","OriginState":"NY","OriginStateFips":36,"OriginStateName":"New 
York","OriginWac":22,"DestAirportID":12892,"DestAirportSeqID":1289203,"DestCityMarketID":32575,"Dest":"LAX","DestCityName":"Los
 Angeles, 
CA","DestState":"CA","DestStateFips":6,"DestStateName":"California","DestWac":91,"CRSDepTime":900,"DepTime":903,"DepDelay":3.0,"DepDelayMinutes":3.0,"DepDel15":0.0,"DepartureDelayGroups":0,"DepTimeBlk":"0900-0959","TaxiOut":37.0,"WheelsOff":940,"WheelsOn":1225,"TaxiIn":10.0,"CRSArrTime":1235,"ArrTime":1235,"ArrDelay":0.0,"ArrDelayMinutes":0.0,"ArrDel15":0.0,"ArrivalDelayGroups":0,"ArrTimeBlk":"1200-1259","Cancelled":0.0,"CancellationCode":"","Diverted":0.0,"CRSElapsedTime":395.0,"ActualElapsedTime":392.0,"AirTime":345.0,"Flights":1.0,"Distance":2475.0,"DistanceGroup":10,"DivAirportLandings":0,"Div1Airport":"","Div1TailNum":"","Div2Airport":"","Div2TailNum":"","Div3Airport":"","Div3WheelsOff":"","Div3TailNum":"","Div4Airport":"","Div4AirportID":"","Div4AirportSeqID":"","Div4WheelsOn":"","Div4TotalGTime":"","Div4LongestGTime":"","Div4WheelsOff":"","Div4TailNum":"","Div5Airport":"","Div5AirportID":"","Div5AirportSeqID":"","Div5WheelsOn":"","Div5TotalGTime":"","Div5LongestGTime":"","Div5WheelsOff":"","Div5TailNum":"","":""}
{code}
Thanks!

> PySpark DataFrame.rdd's can't be saved to an arbitrary Hadoop OutputFormat
> --------------------------------------------------------------------------
>
>                 Key: SPARK-14229
>                 URL: https://issues.apache.org/jira/browse/SPARK-14229
>             Project: Spark
>          Issue Type: Bug
>          Components: Input/Output, PySpark, Spark Shell
>    Affects Versions: 1.6.1
>            Reporter: Russell Jurney
>
> I am able to save data to MongoDB from any RDD... provided that RDD does not 
> belong to a DataFrame. If I use DataFrame.rdd, it is not possible to save via 
> saveAsNewAPIHadoopFile whatsoever. I have tested that this applies to saving 
> to MongoDB, BSON files, and Elasticsearch.
> I get the following error when I try to save to a HadoopFile:
> config = {"mongo.output.uri": 
> "mongodb://localhost:27017/agile_data_science.on_time_performance"}
> n [3]: on_time_dataframe.rdd.saveAsNewAPIHadoopFile(
>    ...:   path='file://unused', 
>    ...:   outputFormatClass='com.mongodb.hadoop.MongoOutputFormat',
>    ...:   keyClass='org.apache.hadoop.io.Text', 
>    ...:   valueClass='org.apache.hadoop.io.MapWritable', 
>    ...:   conf=config
>    ...: )
> 16/03/28 19:59:57 INFO storage.MemoryStore: Block broadcast_1 stored as 
> values in memory (estimated size 62.7 KB, free 147.3 KB)
> 16/03/28 19:59:57 INFO storage.MemoryStore: Block broadcast_1_piece0 stored 
> as bytes in memory (estimated size 20.4 KB, free 167.7 KB)
> 16/03/28 19:59:57 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in 
> memory on localhost:61301 (size: 20.4 KB, free: 511.1 MB)
> 16/03/28 19:59:57 INFO spark.SparkContext: Created broadcast 1 from 
> javaToPython at NativeMethodAccessorImpl.java:-2
> 16/03/28 19:59:57 INFO Configuration.deprecation: mapred.min.split.size is 
> deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
> 16/03/28 19:59:57 INFO parquet.ParquetRelation: Reading Parquet file(s) from 
> file:/Users/rjurney/Software/Agile_Data_Code_2/data/on_time_performance.parquet/part-r-00000-32089f1b-5447-4a75-b008-4fd0a0a8b846.gz.parquet
> 16/03/28 19:59:57 INFO spark.SparkContext: Starting job: take at 
> SerDeUtil.scala:231
> 16/03/28 19:59:57 INFO scheduler.DAGScheduler: Got job 1 (take at 
> SerDeUtil.scala:231) with 1 output partitions
> 16/03/28 19:59:57 INFO scheduler.DAGScheduler: Final stage: ResultStage 1 
> (take at SerDeUtil.scala:231)
> 16/03/28 19:59:57 INFO scheduler.DAGScheduler: Parents of final stage: List()
> 16/03/28 19:59:57 INFO scheduler.DAGScheduler: Missing parents: List()
> 16/03/28 19:59:57 INFO scheduler.DAGScheduler: Submitting ResultStage 1 
> (MapPartitionsRDD[6] at mapPartitions at SerDeUtil.scala:146), which has no 
> missing parents
> 16/03/28 19:59:57 INFO storage.MemoryStore: Block broadcast_2 stored as 
> values in memory (estimated size 14.9 KB, free 182.6 KB)
> 16/03/28 19:59:57 INFO storage.MemoryStore: Block broadcast_2_piece0 stored 
> as bytes in memory (estimated size 7.5 KB, free 190.1 KB)
> 16/03/28 19:59:57 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in 
> memory on localhost:61301 (size: 7.5 KB, free: 511.1 MB)
> 16/03/28 19:59:57 INFO spark.SparkContext: Created broadcast 2 from broadcast 
> at DAGScheduler.scala:1006
> 16/03/28 19:59:57 INFO scheduler.DAGScheduler: Submitting 1 missing tasks 
> from ResultStage 1 (MapPartitionsRDD[6] at mapPartitions at 
> SerDeUtil.scala:146)
> 16/03/28 19:59:57 INFO scheduler.TaskSchedulerImpl: Adding task set 1.0 with 
> 1 tasks
> 16/03/28 19:59:57 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 
> 1.0 (TID 8, localhost, partition 0,PROCESS_LOCAL, 2739 bytes)
> 16/03/28 19:59:57 INFO executor.Executor: Running task 0.0 in stage 1.0 (TID 
> 8)
> 16/03/28 19:59:58 INFO 
> parquet.ParquetRelation$$anonfun$buildInternalScan$1$$anon$1: Input split: 
> ParquetInputSplit{part: 
> file:/Users/rjurney/Software/Agile_Data_Code_2/data/on_time_performance.parquet/part-r-00000-32089f1b-5447-4a75-b008-4fd0a0a8b846.gz.parquet
>  start: 0 end: 134217728 length: 134217728 hosts: []}
> 16/03/28 19:59:59 INFO compress.CodecPool: Got brand-new decompressor [.gz]
> 16/03/28 19:59:59 ERROR executor.Executor: Exception in task 0.0 in stage 1.0 
> (TID 8)
> net.razorvine.pickle.PickleException: expected zero arguments for 
> construction of ClassDict (for pyspark.sql.types._create_row)
>       at 
> net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
>       at net.razorvine.pickle.Unpickler.load_reduce(Unpickler.java:707)
>       at net.razorvine.pickle.Unpickler.dispatch(Unpickler.java:175)
>       at net.razorvine.pickle.Unpickler.load(Unpickler.java:99)
>       at net.razorvine.pickle.Unpickler.loads(Unpickler.java:112)
>       at 
> org.apache.spark.api.python.SerDeUtil$$anonfun$pythonToJava$1$$anonfun$apply$1.apply(SerDeUtil.scala:150)
>       at 
> org.apache.spark.api.python.SerDeUtil$$anonfun$pythonToJava$1$$anonfun$apply$1.apply(SerDeUtil.scala:149)
>       at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
>       at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
>       at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>       at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>       at 
> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>       at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>       at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>       at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>       at scala.collection.AbstractIterator.to(Iterator.scala:1157)
>       at 
> scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>       at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>       at 
> scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>       at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
>       at 
> org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
>       at 
> org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
>       at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
>       at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
>       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
>       at org.apache.spark.scheduler.Task.run(Task.scala:89)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> Traceback (most recent call last):
>   File 
> "/Users/rjurney/Software/Agile_Data_Code_2/spark/python/lib/pyspark.zip/pyspark/daemon.py",
>  line 157, in manager
>   File 
> "/Users/rjurney/Software/Agile_Data_Code_2/spark/python/lib/pyspark.zip/pyspark/daemon.py",
>  line 61, in worker
>   File 
> "/Users/rjurney/Software/Agile_Data_Code_2/spark/python/lib/pyspark.zip/pyspark/worker.py",
>  line 136, in main
>     if read_int(infile) == SpecialLengths.END_OF_STREAM:
>   File 
> "/Users/rjurney/Software/Agile_Data_Code_2/spark/python/lib/pyspark.zip/pyspark/serializers.py",
>  line 545, in read_int
>     raise EOFError
> EOFError
> 16/03/28 19:59:59 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 1.0 
> (TID 8, localhost): net.razorvine.pickle.PickleException: expected zero 
> arguments for construction of ClassDict (for pyspark.sql.types._create_row)
>       at 
> net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
>       at net.razorvine.pickle.Unpickler.load_reduce(Unpickler.java:707)
>       at net.razorvine.pickle.Unpickler.dispatch(Unpickler.java:175)
>       at net.razorvine.pickle.Unpickler.load(Unpickler.java:99)
>       at net.razorvine.pickle.Unpickler.loads(Unpickler.java:112)
>       at 
> org.apache.spark.api.python.SerDeUtil$$anonfun$pythonToJava$1$$anonfun$apply$1.apply(SerDeUtil.scala:150)
>       at 
> org.apache.spark.api.python.SerDeUtil$$anonfun$pythonToJava$1$$anonfun$apply$1.apply(SerDeUtil.scala:149)
>       at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
>       at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
>       at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>       at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>       at 
> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>       at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>       at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>       at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>       at scala.collection.AbstractIterator.to(Iterator.scala:1157)
>       at 
> scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>       at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>       at 
> scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>       at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
>       at 
> org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
>       at 
> org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
>       at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
>       at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
>       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
>       at org.apache.spark.scheduler.Task.run(Task.scala:89)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> 16/03/28 19:59:59 ERROR scheduler.TaskSetManager: Task 0 in stage 1.0 failed 
> 1 times; aborting job
> 16/03/28 19:59:59 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 1.0, 
> whose tasks have all completed, from pool 
> 16/03/28 19:59:59 INFO scheduler.TaskSchedulerImpl: Cancelling stage 1
> 16/03/28 19:59:59 INFO scheduler.DAGScheduler: ResultStage 1 (take at 
> SerDeUtil.scala:231) failed in 1.683 s
> 16/03/28 19:59:59 INFO scheduler.DAGScheduler: Job 1 failed: take at 
> SerDeUtil.scala:231, took 1.703169 s
> ---------------------------------------------------------------------------
> Py4JJavaError                             Traceback (most recent call last)
> <ipython-input-3-c91c1bc7b72a> in <module>()
>       4   keyClass='org.apache.hadoop.io.Text',
>       5   valueClass='org.apache.hadoop.io.MapWritable',
> ----> 6   conf=config
>       7 )
> /Users/rjurney/Software/Agile_Data_Code_2/spark/python/pyspark/rdd.pyc in 
> saveAsNewAPIHadoopFile(self, path, outputFormatClass, keyClass, valueClass, 
> keyConverter, valueConverter, conf)
>    1372                                                        
> outputFormatClass,
>    1373                                                        keyClass, 
> valueClass,
> -> 1374                                                        keyConverter, 
> valueConverter, jconf)
>    1375 
>    1376     def saveAsHadoopDataset(self, conf, keyConverter=None, 
> valueConverter=None):
> /Users/rjurney/Software/Agile_Data_Code_2/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py
>  in __call__(self, *args)
>     811         answer = self.gateway_client.send_command(command)
>     812         return_value = get_return_value(
> --> 813             answer, self.gateway_client, self.target_id, self.name)
>     814 
>     815         for temp_arg in temp_args:
> /Users/rjurney/Software/Agile_Data_Code_2/spark/python/pyspark/sql/utils.pyc 
> in deco(*a, **kw)
>      43     def deco(*a, **kw):
>      44         try:
> ---> 45             return f(*a, **kw)
>      46         except py4j.protocol.Py4JJavaError as e:
>      47             s = e.java_exception.toString()
> /Users/rjurney/Software/Agile_Data_Code_2/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py
>  in get_return_value(answer, gateway_client, target_id, name)
>     306                 raise Py4JJavaError(
>     307                     "An error occurred while calling {0}{1}{2}.\n".
> --> 308                     format(target_id, ".", name), value)
>     309             else:
>     310                 raise Py4JError(
> Py4JJavaError: An error occurred while calling 
> z:org.apache.spark.api.python.PythonRDD.saveAsNewAPIHadoopFile.
> : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 
> in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 
> (TID 8, localhost): net.razorvine.pickle.PickleException: expected zero 
> arguments for construction of ClassDict (for pyspark.sql.types._create_row)
>       at 
> net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
>       at net.razorvine.pickle.Unpickler.load_reduce(Unpickler.java:707)
>       at net.razorvine.pickle.Unpickler.dispatch(Unpickler.java:175)
>       at net.razorvine.pickle.Unpickler.load(Unpickler.java:99)
>       at net.razorvine.pickle.Unpickler.loads(Unpickler.java:112)
>       at 
> org.apache.spark.api.python.SerDeUtil$$anonfun$pythonToJava$1$$anonfun$apply$1.apply(SerDeUtil.scala:150)
>       at 
> org.apache.spark.api.python.SerDeUtil$$anonfun$pythonToJava$1$$anonfun$apply$1.apply(SerDeUtil.scala:149)
>       at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
>       at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
>       at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>       at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>       at 
> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>       at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>       at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>       at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>       at scala.collection.AbstractIterator.to(Iterator.scala:1157)
>       at 
> scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>       at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>       at 
> scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>       at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
>       at 
> org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
>       at 
> org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
>       at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
>       at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
>       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
>       at org.apache.spark.scheduler.Task.run(Task.scala:89)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> Driver stacktrace:
>       at 
> org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
>       at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
>       at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
>       at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>       at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>       at 
> org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
>       at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
>       at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
>       at scala.Option.foreach(Option.scala:236)
>       at 
> org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
>       at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
>       at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
>       at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
>       at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>       at 
> org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
>       at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1328)
>       at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
>       at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
>       at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
>       at org.apache.spark.rdd.RDD.take(RDD.scala:1302)
>       at 
> org.apache.spark.api.python.SerDeUtil$.pythonToPairRDD(SerDeUtil.scala:231)
>       at 
> org.apache.spark.api.python.PythonRDD$.saveAsNewAPIHadoopFile(PythonRDD.scala:775)
>       at 
> org.apache.spark.api.python.PythonRDD.saveAsNewAPIHadoopFile(PythonRDD.scala)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:497)
>       at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
>       at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
>       at py4j.Gateway.invoke(Gateway.java:259)
>       at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
>       at py4j.commands.CallCommand.execute(CallCommand.java:79)
>       at py4j.GatewayConnection.run(GatewayConnection.java:209)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: net.razorvine.pickle.PickleException: expected zero arguments for 
> construction of ClassDict (for pyspark.sql.types._create_row)
>       at 
> net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
>       at net.razorvine.pickle.Unpickler.load_reduce(Unpickler.java:707)
>       at net.razorvine.pickle.Unpickler.dispatch(Unpickler.java:175)
>       at net.razorvine.pickle.Unpickler.load(Unpickler.java:99)
>       at net.razorvine.pickle.Unpickler.loads(Unpickler.java:112)
>       at 
> org.apache.spark.api.python.SerDeUtil$$anonfun$pythonToJava$1$$anonfun$apply$1.apply(SerDeUtil.scala:150)
>       at 
> org.apache.spark.api.python.SerDeUtil$$anonfun$pythonToJava$1$$anonfun$apply$1.apply(SerDeUtil.scala:149)
>       at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
>       at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
>       at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>       at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>       at 
> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>       at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>       at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>       at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>       at scala.collection.AbstractIterator.to(Iterator.scala:1157)
>       at 
> scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>       at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>       at 
> scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>       at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
>       at 
> org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
>       at 
> org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
>       at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
>       at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
>       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
>       at org.apache.spark.scheduler.Task.run(Task.scala:89)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       ... 1 more


