Would you try it with new? Note that SequenceFileRDDFunctions takes two type
parameters (key and value), not a single tuple:

new org.apache.spark.rdd.SequenceFileRDDFunctions[NullWritable, BytesWritable](
  out.mapPartitions(iter => iter.grouped(10).map(_.toArray))
    .map(x => (NullWritable.get(), new BytesWritable(serialize(x)))))
  .saveAsSequenceFile("output")
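
For reference, a self-contained sketch of the explicit construction. Here
out and serialize are placeholders for your RDD and your own serialization
helper, so adjust the types to match your code:

    import org.apache.hadoop.io.{BytesWritable, NullWritable}
    import org.apache.spark.rdd.SequenceFileRDDFunctions

    // out: RDD[T]; serialize: Array[T] => Array[Byte] (your own helper)
    val pairs = out.mapPartitions(iter => iter.grouped(10).map(_.toArray))
      .map(x => (NullWritable.get(), new BytesWritable(serialize(x))))

    // SequenceFileRDDFunctions is a plain class with no companion object,
    // so it must be constructed with new rather than applied like a function.
    new SequenceFileRDDFunctions(pairs).saveAsSequenceFile("output")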


And did you import org.apache.spark.SparkContext._ so that the implicit
conversion works for you?
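
With that import in scope, the implicit conversion defined on the
SparkContext companion object should let your original copy/pasted body
compile unchanged. A rough sketch, again with out and serialize standing in
for your own code:

    import org.apache.spark.SparkContext._  // brings the RDD-to-SequenceFileRDDFunctions implicit into scope
    import org.apache.hadoop.io.{BytesWritable, NullWritable}

    out.mapPartitions(iter => iter.grouped(10).map(_.toArray))
      .map(x => (NullWritable.get(), new BytesWritable(serialize(x))))
      .saveAsSequenceFile("output")  // resolved via the implicit conversion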



On Sat, Jan 4, 2014 at 11:42 PM, Aureliano Buendia <[email protected]> wrote:

> Hi,
>
> I'm trying to create a custom version of saveAsObjectFile(). However, I do
> not seem to be able to use SequenceFileRDDFunctions in my package.
>
> I simply copy/pasted the saveAsObjectFile() body into my function:
>
> out.mapPartitions(iter => iter.grouped(10).map(_.toArray))
>       .map(x => (NullWritable.get(), new BytesWritable(serialize(x))))
>       .saveAsSequenceFile("output")
>
> But that gives me this error:
>
> value saveAsSequenceFile is not a member of
> org.apache.spark.rdd.RDD[(org.apache.hadoop.io.NullWritable,
> org.apache.hadoop.io.BytesWritable)]
> possible cause: maybe a semicolon is missing before `value
> saveAsSequenceFile'?
>       .saveAsSequenceFile("output")
>        ^
>
> The Scala implicit conversion error is not of any help here, so I tried to
> apply the conversion explicitly:
>
> org.apache.spark.rdd.SequenceFileRDDFunctions[(NullWritable,
> BytesWritable)](out.mapPartitions(iter =>
> iter.grouped(10).map(_.toArray))
>       .map(x => (NullWritable.get(), new BytesWritable(serialize(x)))))
>       .saveAsSequenceFile("output")
>
> Giving me this error:
>
> object SequenceFileRDDFunctions is not a member of package
> org.apache.spark.rdd
> Note: class SequenceFileRDDFunctions exists, but it has no companion
> object.
>     org.apache.spark.rdd.SequenceFileRDDFunctions[(NullWritable,
> BytesWritable)](out.mapPartitions(iter => iter.grouped(10).map(_.toArray))
>                          ^
>
> Is this Scala compiler version-mismatch hell?
>



-- 

~Yours, Xuefeng Wu/吴雪峰  敬上
