This is what I meant by 'initial cause':

Caused by: java.lang.ClassNotFoundException: com.datastax.spark.connector.mapper.ColumnMapper

So it is, in fact, a classpath problem.

Here is the class in question https://github.com/datastax/spark-cassandra-connector/blob/master/spark-cassandra-connector/src/main/scala/com/datastax/spark/connector/mapper/ColumnMapper.scala

Maybe it would be worthwhile to put this at the top of your main method

System.out.println(System.getProperty("java.class.path"));

and show what that prints.
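
If you want to go a step further, here is a small standalone sketch (ClasspathCheck is just a made-up name, not part of Spark or the connector) that prints the classpath and asks the classloader directly whether the class named in your error is visible. Run it with the same spark-submit / classpath you use for the job:

// Diagnostic sketch: print the driver classpath and probe for the class
// named in the error. If Class.forName fails here, the connector jar is
// not on the classpath of the JVM running the driver.
object ClasspathCheck {
  def main(args: Array[String]): Unit = {
    println("java.class.path = " + System.getProperty("java.class.path"))
    try {
      Class.forName("com.datastax.spark.connector.mapper.ColumnMapper")
      println("ColumnMapper was found")
    } catch {
      case e: ClassNotFoundException =>
        println("ColumnMapper is NOT on the classpath: " + e)
    }
  }
}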

What versions of Cassandra and of the Spark Cassandra connector are you using, btw?
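
One more thing worth a look, though this is only a guess based on the sbt file further down the thread: the connector is marked "provided" there, which tells sbt to leave it out of the packaged jar, and unlike spark-core it is not shipped with the Spark distribution itself. Here is a sketch of the dependency list with that scope dropped, assuming you build a fat jar with sbt-assembly (or otherwise put the connector jar on the driver's classpath yourself):

// spark-core and spark-sql really are provided by the Spark installation;
// the Cassandra connector is not, so it has to be packaged with the app.
libraryDependencies ++= Seq(
  "org.apache.spark"   %% "spark-core"                % "1.1.1" % "provided",
  "org.apache.spark"   %% "spark-sql"                 % "1.1.1" % "provided",
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.1.1"
)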





On 04/02/2015 11:16 PM, Tiwari, Tarun wrote:

Sorry, I was unable to reply for a couple of days.

I checked the error again and can’t see any other initial cause. Here is the full error I am getting:

Exception in thread "main" java.lang.NoClassDefFoundError: com/datastax/spark/connector/mapper/ColumnMapper
    at ldCassandraTable.main(ld_Cassandra_tbl_Job.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:329)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
*Caused by: java.lang.ClassNotFoundException: com.datastax.spark.connector.mapper.ColumnMapper*
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)

*From:* Dave Brosius [mailto:dbros...@mebigfatguy.com]
*Sent:* Tuesday, March 31, 2015 8:46 PM
*To:* user@cassandra.apache.org
*Subject:* Re: Getting NoClassDefFoundError for com/datastax/spark/connector/mapper/ColumnMapper

Is there an 'initial cause' listed under the exception you gave? NoClassDefFoundError is not exactly the same as ClassNotFoundException: it can mean that ColumnMapper couldn't complete its static initializer, which could be because some other class couldn't be found, or because of some other, non-classloader-related error.
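
For what it's worth, here is a tiny standalone sketch (made-up class names, nothing to do with the connector) of that distinction: the class file is on the classpath, but its initializer throws, so the first use fails with ExceptionInInitializerError and every later use fails with NoClassDefFoundError.

// Hypothetical example only. Broken's initializer throws, so the JVM marks
// the class as failed; later references report NoClassDefFoundError even
// though Broken.class is present on the classpath.
object Broken {
  val value: Int = throw new RuntimeException("initializer failed")
}

object InitFailureDemo {
  def main(args: Array[String]): Unit = {
    try println(Broken.value)
    catch { case t: Throwable => println("first use:  " + t) }

    try println(Broken.value)
    catch { case t: Throwable => println("second use: " + t) }
  }
}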

On 2015-03-31 10:42, Tiwari, Tarun wrote:

    Hi Experts,

    I am getting java.lang.NoClassDefFoundError:
    com/datastax/spark/connector/mapper/ColumnMapper while running an
    app to load data into a Cassandra table using the DataStax Spark
    connector.

    Is there something else I need to import in the program, or a
    dependency I am missing?

    *RUNTIME ERROR:* Exception in thread "main"
    java.lang.NoClassDefFoundError:
    com/datastax/spark/connector/mapper/ColumnMapper
        at ldCassandraTable.main(ld_Cassandra_tbl_Job.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

    *Below is my scala program*

    /*** ld_Cassandra_Table.scala ***/

    import org.apache.spark.SparkContext
    import org.apache.spark.SparkContext._
    import org.apache.spark.SparkConf
    import com.datastax.spark.connector._

    object ldCassandraTable {

      def main(args: Array[String]) {
        val fileName = args(0)
        val tblName = args(1)
        // the keyspace was not defined in the original listing; assumed here
        // to be passed in as a third argument
        val keyspace = args(2)

        val conf = new SparkConf(true)
          .set("spark.cassandra.connection.host", "<MASTER HOST>")
          .setMaster("<MASTER URL>")
          .setAppName("LoadCassandraTableApp")
        val sc = new SparkContext(conf)

        sc.addJar("/home/analytics/Installers/spark-cassandra-connector-1.1.1/spark-cassandra-connector/target/scala-2.10/spark-cassandra-connector-assembly-1.1.1.jar")

        // split each '|'-delimited line and write the 22 fields to Cassandra
        val normalfill = sc.textFile(fileName).map(line => line.split('|'))
        normalfill.map(line => (line(0), line(1), line(2), line(3), line(4),
          line(5), line(6), line(7), line(8), line(9), line(10), line(11),
          line(12), line(13), line(14), line(15), line(16), line(17), line(18),
          line(19), line(20), line(21))).saveToCassandra(keyspace, tblName,
          SomeColumns("wfctotalid", "timesheetitemid", "employeeid",
            "durationsecsqty", "wageamt", "moneyamt", "applydtm", "laboracctid",
            "paycodeid", "startdtm", "stimezoneid", "adjstartdtm", "adjapplydtm",
            "enddtm", "homeaccountsw", "notpaidsw", "wfcjoborgid", "unapprovedsw",
            "durationdaysqty", "updatedtm", "totaledversion", "acctapprovalnum"))

        println("Records loaded to %s".format(tblName))
        Thread.sleep(500)
        sc.stop()
      }
    }

    *Below is the sbt file:*

    name := "POC"

    version := "0.0.1"

    scalaVersion := "2.10.4"

    // additional libraries
    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core" % "1.1.1" % "provided",
      "org.apache.spark" %% "spark-sql" % "1.1.1" % "provided",
      "com.datastax.spark" %% "spark-cassandra-connector" % "1.1.1" % "provided"
    )

    Regards,

    *Tarun Tiwari* | Workforce Analytics-ETL | *Kronos India*

    M: +91 9540 28 27 77 | Tel: +91 120 4015200


