Looking at ./core/src/main/scala/org/apache/spark/api/java/JavaSparkContext.scala:

   * Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and
   * BytesWritable values that contain a serialized partition. This is still an experimental storage
...
  def objectFile[T](path: String, minPartitions: Int): JavaRDD[T] = {
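
For reference, here is a minimal sketch of the Scala-side round trip through that storage format (the path, app name, and object name are made up for illustration):

import org.apache.spark.{SparkConf, SparkContext}

object ObjectFileRoundTrip {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("ObjectFileRoundTrip").setMaster("local[*]"))
    val nums = sc.parallelize(1 to 100, numSlices = 4)
    // saveAsObjectFile writes a SequenceFile of NullWritable keys and
    // BytesWritable values, each value holding a serialized partition.
    nums.saveAsObjectFile("/tmp/objectfile-demo")
    // objectFile reads it back; the element type comes from the caller.
    val restored = sc.objectFile[Int]("/tmp/objectfile-demo", minPartitions = 4)
    assert(restored.count() == 100)
    sc.stop()
  }
}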

and ./core/src/main/scala/org/apache/spark/rdd/RDD.scala:

  def saveAsTextFile(path: String): Unit = withScope {
...
    // Therefore, here we provide an explicit Ordering `null` to make sure the compiler generate
    // same bytecodes for `saveAsTextFile`.
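
A toy illustration of that trick (all names hypothetical, not Spark's actual internals): the Ordering is an implicit parameter with a `null` default, and internal call sites pass the `null` explicitly, so the emitted bytecode never depends on how a particular Scala/Hadoop combination resolves the implicit:

import scala.reflect.ClassTag

// Hypothetical mini-version of the pattern used by rddToPairRDDFunctions.
class PairFunctions[K, V](pairs: Seq[(K, V)])(
    implicit kt: ClassTag[K], ord: Ordering[K] = null) {
  def describe(): String = s"${pairs.size} pairs, ordering = $ord"
}

object OrderingDemo {
  def main(args: Array[String]): Unit = {
    // Passing the implicit arguments explicitly bypasses implicit search.
    val pf = new PairFunctions(Seq(1 -> "a", 2 -> "b"))(implicitly[ClassTag[Int]], null)
    println(pf.describe()) // 2 pairs, ordering = null
  }
}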

Which Hadoop release are you using?
Can you show us your code so that we can have more context?
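
That NoSuchMethodError on scala.Predef$.$conforms is often a symptom of a Scala version mismatch: code compiled against Scala 2.11 running on a Spark build compiled with 2.10 (the default for 1.3.0). While we wait for your code, here is a minimal sketch of what a NullWritable SequenceFile write usually looks like (the output path and object name are made up, since we have not seen TestWriteSeqFile02.scala):

import org.apache.hadoop.io.{NullWritable, Text}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._ // pair-RDD implicits (pre-1.3; harmless on 1.3)

object NullWritableSeqFileDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("NullWritableSeqFileDemo"))
    val lines = sc.parallelize(Seq("a", "b", "c"))
    // NullWritable carries no payload; NullWritable.get() returns the singleton.
    lines.map(v => (NullWritable.get(), new Text(v)))
      .saveAsSequenceFile("/tmp/nullwritable-demo")
    sc.stop()
  }
}

If that same shape of code fails for you, double-check that the Scala version in your build matches the one your Spark assembly was built with.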

Cheers

On Sat, May 9, 2015 at 9:58 PM, donhoff_h <165612...@qq.com> wrote:

> Hi, experts.
>
> I wrote a Spark program to write a SequenceFile. I found that if I used
> NullWritable as the key class of the SequenceFile, the program threw
> exceptions, but if I used BytesWritable or Text as the key class, it did not.
>
> Does Spark not support the NullWritable class? The Spark version I use is
> 1.3.0, and the exception is as follows:
>
> ERROR yarn.ApplicationMaster: User class threw exception:
> scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
>   java.lang.NoSuchMethodError: scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
>     at dhao.test.SeqFile.TestWriteSeqFile02$.main(TestWriteSeqFile02.scala:21)
>     at dhao.test.SeqFile.TestWriteSeqFile02.main(TestWriteSeqFile02.scala)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:480)
>
