Thanks a lot, that fixed the issue :)
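For anyone who hits the same error: `join` on pair RDDs comes from an implicit conversion to `PairRDDFunctions`, which in Spark 1.x is brought into scope by importing the members of the `SparkContext` companion object. A minimal sketch of the compiling version (from Spark 1.3 onward the implicit is in scope automatically, so the extra import is only needed on older versions):

```scala
import org.apache.spark.{SparkConf, SparkContext}
// Brings the implicit conversion RDD[(K, V)] => PairRDDFunctions[K, V]
// into scope (required in Spark 1.x before 1.3).
import org.apache.spark.SparkContext._
import org.apache.spark.rdd.RDD

def joinTest(rddA: RDD[(String, Int)], rddB: RDD[(String, Int)]): RDD[(String, Int)] = {
  // join pairs up values that share a key; map then sums them per key
  rddA.join(rddB).map { case (k, (a, b)) => (k, a + b) }
}
```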

On Thu, Sep 4, 2014 at 4:51 PM, Zhan Zhang <zzh...@hortonworks.com> wrote:

> Try this:
> import org.apache.spark.SparkContext._
>
> Thanks.
>
> Zhan Zhang
>
>
> On Sep 4, 2014, at 4:36 PM, Veeranagouda Mukkanagoudar <veera...@gmail.com>
> wrote:
>
> I am planning to use the RDD join operation. While compiling some test
> code to try it out, I am getting the following compilation error:
>
> value join is not a member of org.apache.spark.rdd.RDD[(String, Int)]
> [error]     rddA.join(rddB).map { case (k, (a, b)) => (k, a+b) }
>
> Code:
>
> import org.apache.spark.{SparkConf, SparkContext}
> import org.apache.spark.rdd.RDD
>
> def joinTest(rddA: RDD[(String, Int)], rddB: RDD[(String, Int)]):
> RDD[(String, Int)] = {
>     rddA.join(rddB).map { case (k, (a, b)) => (k, a+b) }
> }
>
> Any help would be great.
>
> Veera
>
>
>
