Hi, SparkContext.newAPIHadoopRDD() is for working with the new Hadoop MapReduce API.
So you should import

org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;

instead of

org.apache.accumulo.core.client.mapred.AccumuloInputFormat;
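
For reference, a minimal sketch of the corrected call (sc and accumuloJob are the JavaSparkContext and the Hadoop Job from your snippet):

    import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat; // note: mapreduce, not mapred
    import org.apache.accumulo.core.data.Key;
    import org.apache.accumulo.core.data.Value;
    import org.apache.spark.api.java.JavaPairRDD;

    // The mapreduce AccumuloInputFormat extends
    // org.apache.hadoop.mapreduce.InputFormat<Key, Value>, so it satisfies
    // the bound <F extends InputFormat<K, V>> on newAPIHadoopRDD.
    JavaPairRDD<Key, Value> accumuloRDD =
        sc.newAPIHadoopRDD(accumuloJob.getConfiguration(),
                           AccumuloInputFormat.class,
                           Key.class,
                           Value.class);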

-----Original Message-----
From: madhvi [mailto:madhvi.gu...@orkash.com] 
Sent: Wednesday, April 22, 2015 5:13 PM
To: user@spark.apache.org
Subject: Error in creating spark RDD

Hi,

I am creating a Spark RDD from Accumulo like this:

JavaPairRDD<Key, Value> accumuloRDD =
    sc.newAPIHadoopRDD(accumuloJob.getConfiguration(),
                       AccumuloInputFormat.class,
                       Key.class,
                       Value.class);

But it does not compile, failing with the following error:

Bound mismatch: The generic method newAPIHadoopRDD(Configuration, Class<F>, 
Class<K>, Class<V>) of type JavaSparkContext is not applicable for the 
arguments (Configuration, Class<AccumuloInputFormat>, Class<Key>, 
Class<Value>). The inferred type AccumuloInputFormat is not a valid substitute 
for the bounded parameter <F extends InputFormat<K,V>>

I am using the following import statements:

import org.apache.accumulo.core.client.mapred.AccumuloInputFormat;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;

I cannot figure out what the problem is here.

Thanks
Madhvi



