Another thing to check is which version of the Spark-Cassandra-Connector the 
Spark Jobserver is passing to the workers. It looks like when you submit with 
spark-submit you are sending the correct SCC jar, but the Spark Jobserver may 
be using a different one.
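
One way to rule that out (a minimal build.sbt sketch, not from this thread; 
every version number below is an assumption you should replace with what your 
cluster actually runs) is to pin the connector in the job's own build and ship 
it inside the assembly jar:

// build.sbt -- minimal sketch; all version numbers are assumptions,
// replace them with the versions your cluster actually runs.
scalaVersion := "2.10.5"

libraryDependencies ++= Seq(
  // "provided": Spark itself comes from the cluster and is not bundled
  "org.apache.spark" %% "spark-core" % "1.5.2" % "provided",
  // bundled into the assembly, so the Spark Jobserver workers load the same
  // connector version that spark-submit is being given explicitly
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.5.0"
)

If the job server context also loads extra jars of its own, make sure those 
reference the same connector version as the assembly.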

Mohammed
Author: Big Data Analytics with Spark 
<http://www.amazon.com/Big-Data-Analytics-Spark-Practitioners/dp/1484209656/>

From: Gerard Maas <gerard.m...@gmail.com>
Sent: Wednesday, February 3, 2016 4:56 AM
To: Madabhattula Rajesh Kumar
Cc: user@spark.apache.org
Subject: Re: spark-cassandra

A NoSuchMethodError usually points to a version conflict: your job was probably 
built against a higher version of the Cassandra connector than the one available 
at run time.
Check that the versions are aligned.
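
A quick way to check (a diagnostic sketch, not code from your job) is to print 
which jar the offending class is actually loaded from at run time:

// Diagnostic sketch: report the jar that TableMetadata was loaded from.
// Run it inside the job, or in a spark-shell started with the same classpath.
import com.datastax.driver.core.TableMetadata

val source = Option(classOf[TableMetadata].getProtectionDomain.getCodeSource)
  .map(_.getLocation.toString)
  .getOrElse("<unknown: loaded by a bootstrap/system classloader>")
println(s"TableMetadata loaded from: $source")

If that prints a Java driver jar older than the one the connector was built 
against, getIndexes() simply does not exist there, which is exactly the 
NoSuchMethodError you are seeing.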

-kr, Gerard.

On Wed, Feb 3, 2016 at 1:37 PM, Madabhattula Rajesh Kumar 
<mrajaf...@gmail.com> wrote:
Hi,
I am using Spark Jobserver to submit jobs and the spark-cassandra connector to 
connect to Cassandra. I am getting the exception below when the job runs 
through Spark Jobserver. If I submit the same job through the spark-submit 
command, it works fine.
Please let me know how to solve this issue.


Exception in thread "pool-1-thread-1" java.lang.NoSuchMethodError: com.datastax.driver.core.TableMetadata.getIndexes()Ljava/util/List;
    at com.datastax.spark.connector.cql.Schema$.getIndexMap(Schema.scala:193)
    at com.datastax.spark.connector.cql.Schema$.com$datastax$spark$connector$cql$Schema$$fetchPartitionKey(Schema.scala:197)
    at com.datastax.spark.connector.cql.Schema$$anonfun$com$datastax$spark$connector$cql$Schema$$fetchTables$1$2.apply(Schema.scala:239)
    at com.datastax.spark.connector.cql.Schema$$anonfun$com$datastax$spark$connector$cql$Schema$$fetchTables$1$2.apply(Schema.scala:238)
    at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:722)
    at scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:153)
    at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:306)
    at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:721)
    at com.datastax.spark.connector.cql.Schema$.com$datastax$spark$connector$cql$Schema$$fetchTables$1(Schema.scala:238)
    at com.datastax.spark.connector.cql.Schema$$anonfun$com$datastax$spark$connector$cql$Schema$$fetchKeyspaces$1$2.apply(Schema.scala:247)
    at com.datastax.spark.connector.cql.Schema$$anonfun$com$datastax$spark$connector$cql$Schema$$fetchKeyspaces$1$2.apply(Schema.scala:246)
    at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:722)
    at scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:153)
    at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:306)
    at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:721)
    at com.datastax.spark.connector.cql.Schema$.com$datastax$spark$connector$cql$Schema$$fetchKeyspaces$1(Schema.scala:246)
    at com.datastax.spark.connector.cql.Schema$$anonfun$fromCassandra$1.apply(Schema.scala:252)
    at com.datastax.spark.connector.cql.Schema$$anonfun$fromCassandra$1.apply(Schema.scala:249)
    at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withClusterDo$1.apply(CassandraConnector.scala:121)
    at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withClusterDo$1.apply(CassandraConnector.scala:120)
    at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:110)
    at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:109)
    at com.datastax.spark.connector.cql.CassandraConnector.closeResourceAfterUse(CassandraConnector.scala:139)
    at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:109)
    at com.datastax.spark.connector.cql.CassandraConnector.withClusterDo(CassandraConnector.scala:120)
    at com.datastax.spark.connector.cql.Schema$.fromCassandra(Schema.scala:249)
    at com.datastax.spark.connector.writer.TableWriter$.apply(TableWriter.scala:263)
    at com.datastax.spark.connector.RDDFunctions.saveToCassandra(RDDFunctions.scala:36)
    at com.cisco.ss.etl.utils.ETLHelper$class.persistBackupConfigDevicesData(ETLHelper.scala:79)
    at com.cisco.ss.etl.Main$.persistBackupConfigDevicesData(Main.scala:13)
    at com.cisco.ss.etl.utils.ETLHelper$class.persistByBacthes(ETLHelper.scala:43)
    at com.cisco.ss.etl.Main$.persistByBacthes(Main.scala:13)
    at com.cisco.ss.etl.Main$$anonfun$runJob$3.apply(Main.scala:48)
    at com.cisco.ss.etl.Main$$anonfun$runJob$3.apply(Main.scala:45)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at com.cisco.ss.etl.Main$.runJob(Main.scala:45)
    at com.cisco.ss.etl.Main$.runJob(Main.scala:13)
    at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:274)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Regards,
Rajesh
