Don't worry about the implicit params, those are filled in by the compiler. All 
you need to do is provide a key and value type, and a path. Look at how 
sequenceFile gets used in this test:

https://git-wip-us.apache.org/repos/asf?p=incubator-spark.git;a=blob;f=core/src/test/scala/spark/FileSuite.scala;hb=af3c9d50

In particular, the K and V in Spark can be any Writable class, *or* primitive 
types like Int, Double, etc., or String. For the latter, Spark automatically 
uses the corresponding Hadoop Writable (e.g. IntWritable, DoubleWritable, 
Text).
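
To make that concrete, here's a minimal sketch of writing and reading a 
sequence file (the output path is hypothetical, and I'm assuming the 
org.apache.spark package names; adjust the import to `spark._` on older 
builds like the one in that test):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._  // brings the implicit WritableConverters into scope

object SequenceFileExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local", "SequenceFileExample")

    // Write a sequence file of (Int, String) pairs; the implicits map
    // Int -> IntWritable and String -> Text for us.
    // "/tmp/ints-seq" is just a placeholder path.
    val data = sc.parallelize(1 to 100).map(i => (i, i.toString))
    data.saveAsSequenceFile("/tmp/ints-seq")

    // Read it back: only the key type, value type, and path are needed;
    // the ClassManifests and WritableConverters are filled in implicitly.
    val rdd = sc.sequenceFile[Int, String]("/tmp/ints-seq")
    println(rdd.count())

    sc.stop()
  }
}
```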

Matei


On Oct 17, 2013, at 5:35 PM, Shay Seng <[email protected]> wrote:

> Hey gurus,
> 
> I'm having a little trouble deciphering the docs for 
> 
> sequenceFile[K, V](path: String, minSplits: Int = defaultMinSplits)(implicit 
> km: ClassManifest[K], vm: ClassManifest[V], kcf: () ⇒ WritableConverter[K], 
> vcf: () ⇒ WritableConverter[V]): RDD[(K, V)]
> 
> Does anyone have a short example snippet?
> 
> tks
> shay
> 
> 
