[ https://issues.apache.org/jira/browse/SPARK-3350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14119499#comment-14119499 ]
David Greco commented on SPARK-3350:
------------------------------------
Here you are:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/09/03 09:33:10 INFO SecurityManager: Changing view acls to: dgreco,
14/09/03 09:33:10 INFO SecurityManager: Changing modify acls to: dgreco,
14/09/03 09:33:10 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(dgreco, ); users with modify permissions: Set(dgreco, )
14/09/03 09:33:11 INFO Slf4jLogger: Slf4jLogger started
14/09/03 09:33:11 INFO Remoting: Starting remoting
14/09/03 09:33:12 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:49652]
14/09/03 09:33:12 INFO Remoting: Remoting now listens on addresses: [akka.tcp://[email protected]:49652]
14/09/03 09:33:12 INFO Utils: Successfully started service 'sparkDriver' on port 49652.
14/09/03 09:33:12 INFO SparkEnv: Registering MapOutputTracker
14/09/03 09:33:12 INFO SparkEnv: Registering BlockManagerMaster
14/09/03 09:33:12 INFO DiskBlockManager: Created local directory at /var/folders/bw/7dmpv8qx3p75mcmctmchwl5m0000gn/T/spark-local-20140903093312-29d3
14/09/03 09:33:12 INFO Utils: Successfully started service 'Connection manager for block manager' on port 49653.
14/09/03 09:33:12 INFO ConnectionManager: Bound socket to port 49653 with id = ConnectionManagerId(192.168.0.21,49653)
14/09/03 09:33:12 INFO MemoryStore: MemoryStore started with capacity 983.1 MB
14/09/03 09:33:12 INFO BlockManagerMaster: Trying to register BlockManager
14/09/03 09:33:12 INFO BlockManagerMasterActor: Registering block manager 192.168.0.21:49653 with 983.1 MB RAM
14/09/03 09:33:12 INFO BlockManagerMaster: Registered BlockManager
14/09/03 09:33:12 INFO HttpFileServer: HTTP File server directory is /var/folders/bw/7dmpv8qx3p75mcmctmchwl5m0000gn/T/spark-f16014c3-f795-4300-88af-7e161d6f7547
14/09/03 09:33:12 INFO HttpServer: Starting HTTP Server
14/09/03 09:33:12 INFO Utils: Successfully started service 'HTTP file server' on port 49654.
14/09/03 09:33:13 INFO Utils: Successfully started service 'SparkUI' on port 4040.
14/09/03 09:33:13 INFO SparkUI: Started SparkUI at http://192.168.0.21:4040
14/09/03 09:33:15 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/09/03 09:33:15 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://[email protected]:49652/user/HeartbeatReceiver
14/09/03 09:33:18 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
14/09/03 09:33:18 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
14/09/03 09:33:18 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
14/09/03 09:33:18 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
14/09/03 09:33:18 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
14/09/03 09:33:18 INFO SparkContext: Starting job: saveAsHadoopFile at AvroWriteTestCase.scala:67
14/09/03 09:33:18 INFO DAGScheduler: Got job 0 (saveAsHadoopFile at AvroWriteTestCase.scala:67) with 1 output partitions (allowLocal=false)
14/09/03 09:33:18 INFO DAGScheduler: Final stage: Stage 0(saveAsHadoopFile at AvroWriteTestCase.scala:67)
14/09/03 09:33:18 INFO DAGScheduler: Parents of final stage: List()
14/09/03 09:33:18 INFO DAGScheduler: Missing parents: List()
14/09/03 09:33:18 INFO DAGScheduler: Submitting Stage 0 (MappedRDD[4] at map at AvroWriteTestCase.scala:66), which has no missing parents
14/09/03 09:33:19 INFO MemoryStore: ensureFreeSpace(45032) called with curMem=0, maxMem=1030823608
14/09/03 09:33:19 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 44.0 KB, free 983.0 MB)
14/09/03 09:33:19 INFO DAGScheduler: Submitting 1 missing tasks from Stage 0 (MappedRDD[4] at map at AvroWriteTestCase.scala:66)
14/09/03 09:33:19 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
14/09/03 09:33:19 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 3191 bytes)
14/09/03 09:33:19 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
14/09/03 09:33:19 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.NullPointerException
	at org.apache.spark.sql.SchemaRDDLike$class.queryExecution(SchemaRDDLike.scala:52)
	at org.apache.spark.sql.SchemaRDD.queryExecution$lzycompute(SchemaRDD.scala:103)
	at org.apache.spark.sql.SchemaRDD.queryExecution(SchemaRDD.scala:103)
	at org.apache.spark.sql.SchemaRDD.schema(SchemaRDD.scala:126)
	at com.eligotech.hnavigator.prototypes.spark.AvroWriteTestCase$$anonfun$2.apply(AvroWriteTestCase.scala:66)
	at com.eligotech.hnavigator.prototypes.spark.AvroWriteTestCase$$anonfun$2.apply(AvroWriteTestCase.scala:66)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$13.apply(PairRDDFunctions.scala:984)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$13.apply(PairRDDFunctions.scala:974)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
	at org.apache.spark.scheduler.Task.run(Task.scala:54)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
14/09/03 09:33:19 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.NullPointerException:
org.apache.spark.sql.SchemaRDDLike$class.queryExecution(SchemaRDDLike.scala:52)
org.apache.spark.sql.SchemaRDD.queryExecution$lzycompute(SchemaRDD.scala:103)
org.apache.spark.sql.SchemaRDD.queryExecution(SchemaRDD.scala:103)
org.apache.spark.sql.SchemaRDD.schema(SchemaRDD.scala:126)
com.eligotech.hnavigator.prototypes.spark.AvroWriteTestCase$$anonfun$2.apply(AvroWriteTestCase.scala:66)
com.eligotech.hnavigator.prototypes.spark.AvroWriteTestCase$$anonfun$2.apply(AvroWriteTestCase.scala:66)
scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
org.apache.spark.rdd.PairRDDFunctions$$anonfun$13.apply(PairRDDFunctions.scala:984)
org.apache.spark.rdd.PairRDDFunctions$$anonfun$13.apply(PairRDDFunctions.scala:974)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
org.apache.spark.scheduler.Task.run(Task.scala:54)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)
14/09/03 09:33:19 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
14/09/03 09:33:19 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
14/09/03 09:33:19 INFO TaskSchedulerImpl: Cancelling stage 0
14/09/03 09:33:19 INFO DAGScheduler: Failed to run saveAsHadoopFile at AvroWriteTestCase.scala:67
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.NullPointerException:
org.apache.spark.sql.SchemaRDDLike$class.queryExecution(SchemaRDDLike.scala:52)
org.apache.spark.sql.SchemaRDD.queryExecution$lzycompute(SchemaRDD.scala:103)
org.apache.spark.sql.SchemaRDD.queryExecution(SchemaRDD.scala:103)
org.apache.spark.sql.SchemaRDD.schema(SchemaRDD.scala:126)
com.eligotech.hnavigator.prototypes.spark.AvroWriteTestCase$$anonfun$2.apply(AvroWriteTestCase.scala:66)
com.eligotech.hnavigator.prototypes.spark.AvroWriteTestCase$$anonfun$2.apply(AvroWriteTestCase.scala:66)
scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
org.apache.spark.rdd.PairRDDFunctions$$anonfun$13.apply(PairRDDFunctions.scala:984)
org.apache.spark.rdd.PairRDDFunctions$$anonfun$13.apply(PairRDDFunctions.scala:974)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
org.apache.spark.scheduler.Task.run(Task.scala:54)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
	at scala.Option.foreach(Option.scala:236)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
	at akka.actor.ActorCell.invoke(ActorCell.scala:456)
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
	at akka.dispatch.Mailbox.run(Mailbox.scala:219)
	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
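For context, the trace shows SchemaRDD.schema being evaluated inside the map closure at AvroWriteTestCase.scala:66, i.e. on an executor, where the deserialized SchemaRDD apparently no longer has a live SQLContext behind queryExecution (hence the NPE at SchemaRDDLike.scala:52). A hypothetical minimal sketch of that pattern against the Spark 1.1-era API — the record type and names are illustrative assumptions, not taken from the attached test case:

```scala
// Hypothetical reconstruction of the failing pattern inferred from the
// stack trace; see the attached AvroWriteTestCase.scala for the real code.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

case class Person(name: String, age: Int)

object SchemaInClosureSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local").setAppName("SPARK-3350-sketch"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.createSchemaRDD

    val people = sqlContext.createSchemaRDD(
      sc.parallelize(Seq(Person("a", 1), Person("b", 2))))

    // Fails: `people.schema` is evaluated inside the task closure, on the
    // executor, where the SchemaRDD's (transient) SQLContext is null after
    // deserialization, so queryExecution throws the NPE seen above.
    // val broken = people.map(row => (people.schema, row))

    // Works: capture the schema on the driver before shipping the closure.
    val schema = people.schema
    val ok = people.map(row => (schema, row))
    ok.count()
    sc.stop()
  }
}
```

If this reading is right, the anomaly is less about Avro output and more about referencing a SchemaRDD from within its own task closure.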
> Strange anomaly trying to write a SchemaRDD into an Avro file
> -------------------------------------------------------------
>
> Key: SPARK-3350
> URL: https://issues.apache.org/jira/browse/SPARK-3350
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Environment: jdk1.7, macosx
> Reporter: David Greco
> Attachments: AvroWriteTestCase.scala
>
>
> I found a way to automatically save a SchemaRDD in Avro format, similarly to what Spark does with Parquet files.
> I attached a test case to this issue. The code fails with an NPE.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)