Hi Selvan.

I don't deal with Cassandra much, but have you tried the other options described
here?

https://github.com/datastax/spark-cassandra-connector/blob/master/doc/2_loading.md

To get a Spark RDD that represents a Cassandra table, call the
cassandraTable method on the SparkContext object.

import com.datastax.spark.connector._ // Loads implicit functions
sc.cassandraTable("keyspace name", "table name")
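
A fuller sketch of that RDD path, as it would look in spark-shell. This needs a
live Cassandra cluster, so treat it as a session outline rather than something
runnable as-is; the keyspace name, table name, and host below are placeholders,
and the connector version must match your Spark and Scala versions (your
`_2.10:1.3.0` artifact is for Scala 2.10, so a Scala 2.11 build of Spark would
not load it cleanly):

```scala
// Start spark-shell with the connector on the classpath, e.g.:
//   spark-shell --packages com.datastax.spark:spark-cassandra-connector_2.10:1.3.0 \
//               --conf spark.cassandra.connection.host=127.0.0.1

import com.datastax.spark.connector._  // brings cassandraTable and row implicits into scope

// "test_ks" and "test_table" are placeholder names for illustration
val rdd = sc.cassandraTable("test_ks", "test_table")

rdd.first   // fetch one row, to verify connectivity quickly
rdd.count   // full count of rows in the table
```

If the RDD call works but the DataFrame `read.format("org.apache.spark.sql.cassandra")`
path still fails, that would suggest the problem is in the DataFrame source for
that connector/Spark combination rather than in your connection settings.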



HTH


Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 4 September 2016 at 15:52, Selvam Raman <sel...@gmail.com> wrote:

> It's very urgent. Please help me, guys.
>
> On Sun, Sep 4, 2016 at 8:05 PM, Selvam Raman <sel...@gmail.com> wrote:
>
>> Please help me to solve the issue.
>>
>> spark-shell --packages com.datastax.spark:spark-cassandra-connector_2.10:1.3.0 --conf spark.cassandra.connection.host=******
>>
>> val df = sqlContext.read.
>>      | format("org.apache.spark.sql.cassandra").
>>      | options(Map( "table" -> "****", "keyspace" -> "***")).
>>      | load()
>> java.util.NoSuchElementException: key not found: c_table
>>         at scala.collection.MapLike$class.default(MapLike.scala:228)
>>         at org.apache.spark.sql.execution.datasources.CaseInsensitiveMap.default(ddl.scala:151)
>>         at scala.collection.MapLike$class.apply(MapLike.scala:141)
>>         at org.apache.spark.sql.execution.datasources.CaseInsensitiveMap.apply(ddl.scala:151)
>>         at org.apache.spark.sql.cassandra.DefaultSource$.TableRefAndOptions(DefaultSource.scala:120)
>>         at org.apache.spark.sql.cassandra.DefaultSource.createRelation(DefaultSource.scala:56)
>>         at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:125)
>>         a
>>
>> --
>> Selvam Raman
>> "லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து" ("Shun bribery; stand tall")
>>
>
>
>
> --
> Selvam Raman
> "லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து" ("Shun bribery; stand tall")
>
