Re: Return value of newQueryPlan

2016-09-04 Thread Eric Stevens
For question 2, you can't tell the coordinator which node you want it to use
to fulfill the request. It will ask several replicas and use the first
responses that satisfy the consistency level.
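
For reference, here is a minimal sketch of wiring up the policies discussed
further down in this thread, using the DataStax Java driver from Scala. The
contact point, DC name, and whitelisted address are placeholders, and the
builder calls are written against the 3.x driver line as I remember it, so
treat the exact names as an assumption rather than a drop-in snippet:

import java.net.InetSocketAddress
import scala.collection.JavaConverters._
import com.datastax.driver.core.Cluster
import com.datastax.driver.core.policies.{DCAwareRoundRobinPolicy, TokenAwarePolicy, WhiteListPolicy}

// TokenAware wrapping DCAwareRoundRobin: the driver prefers a replica of the
// queried partition as coordinator, but which replica actually serves the
// read is still decided server-side by the coordinator.
val tokenAware = new TokenAwarePolicy(
  DCAwareRoundRobinPolicy.builder().withLocalDc("DC1").build())

// WhiteListPolicy only restricts which nodes may act as coordinator; it does
// not force the data to be read from those nodes.
val whiteListed = new WhiteListPolicy(
  tokenAware, List(new InetSocketAddress("10.0.0.1", 9042)).asJava)

val cluster = Cluster.builder()
  .addContactPoint("10.0.0.1")
  .withLoadBalancingPolicy(tokenAware) // or whiteListed, e.g. for targeted tests
  .build()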

On Fri, Sep 2, 2016, 1:19 PM Siddharth Verma wrote:

> I am debugging an issue on our cluster and trying to find its root cause
> based on our application's behavior.
> I used WhiteListPolicy (I asked a question about it some time back), but it
> was stated that it cannot guarantee the desired behavior.
> Yes, I forgot to mention that I was referring to the Java driver.
> I used the DCAwareRoundRobin and TokenAware policies for the application flow.
> I will ask question 1 on the driver mailing list; it would help if someone
> could answer question 2.
>
> On Fri, Sep 2, 2016 at 6:59 PM, Eric Stevens  wrote:
>
>> These sound like driver-side questions that might be better addressed to
>> your specific driver's mailing list.  But from the terminology I'd guess
>> you're using a DataStax driver, possibly the Java one.
>>
>> If so, you can look at WhiteListPolicy if you want to target specific
>> node(s).  However, aside from testing specific scenarios (like performance
>> testing coordinated operations), it's unlikely that, with a correctly tuned
>> LBP, you'll be able to do a better job of node selection than the driver can.
>> TokenAware with a child policy of DCAwareRoundRobin will choose a primary or
>> replica node as coordinator for any operation where it can know the routing
>> key in advance.
>>
>> With RF equal to the number of nodes per DC, as in your setup, every node
>> owns a replica of every piece of data, so as long as your LBP distributes
>> load evenly, and outside of performance testing, I can't see why you'd need
>> to target specific nodes for anything.
>>
>> On Fri, Sep 2, 2016 at 1:59 AM Siddharth Verma <verma.siddha...@snapdeal.com> wrote:
>>
>>> Hi,
>>> I have DC1 (3 nodes) and DC2 (3 nodes),
>>> with RF 3 in both DCs.
>>>
>>> Question 1: when I create my LoadBalancingPolicy and override
>>> newQueryPlan, is the list of hosts returned by newQueryPlan the candidate
>>> coordinator list?
>>>
>>> Question 2: Can I force the coordinator to hit a particular Cassandra
>>> node only? I used consistency LOCAL_ONE, but I guess it doesn't guarantee
>>> that data will be fetched from that node.
>>>
>>> Thanks
>>> Siddharth Verma
>>>
>>
>


Re: Reading cassandra table using Spark

2016-09-04 Thread DuyHai Doan
"com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured
columnfamily size_estimates"

--> this error message usually occurs when the version of the Java
driver (and thus of the Spark/Cassandra connector) is not aligned with the
Cassandra version

Please give

- C* version
- Spark/C* connector version
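
As an illustration only, once those versions are known, the fix is usually to
launch with a connector release whose compatibility line matches both Spark
and Cassandra. The 1.6.0 coordinate below is an assumption for the example,
not a recommendation for this particular cluster, and the host placeholder is
kept from the original post:

spark-shell --packages com.datastax.spark:spark-cassandra-connector_2.10:1.6.0 \
  --conf spark.cassandra.connection.host=**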



On Sun, Sep 4, 2016 at 5:28 PM, Selvam Raman  wrote:

> Hi,
>
> I am trying to read a Cassandra table as a DataFrame but ran into the issue below:
>
> spark-shell --packages com.datastax.spark:spark-cassandra-connector_2.10:1.3.0
> --conf spark.cassandra.connection.host=**
>
> val df = sqlContext.read.
>  | format("org.apache.spark.sql.cassandra").
>  | options(Map( "table" -> "", "keyspace" -> "***")).
>  | load()
> java.util.NoSuchElementException: key not found: c_table
> at scala.collection.MapLike$class.default(MapLike.scala:228)
> at org.apache.spark.sql.execution.datasources.CaseInsensitiveMap.default(ddl.scala:151)
> at scala.collection.MapLike$class.apply(MapLike.scala:141)
> at org.apache.spark.sql.execution.datasources.CaseInsensitiveMap.apply(ddl.scala:151)
> at org.apache.spark.sql.cassandra.DefaultSource$.TableRefAndOptions(DefaultSource.scala:120)
> at org.apache.spark.sql.cassandra.DefaultSource.createRelation(DefaultSource.scala:56)
> at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:125)
>
>
> When I use sc.cassandraTable(tablename, keyspace) it works fine, but when I
> call an action on it, it throws plenty of errors.
>
> Example:
>  com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured
> columnfamily size_estimates
>
> --
> Selvam Raman
> "லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
>


Reading cassandra table using Spark

2016-09-04 Thread Selvam Raman
Hi,

I am trying to read a Cassandra table as a DataFrame but ran into the issue below:

spark-shell --packages com.datastax.spark:spark-cassandra-connector_2.10:1.3.0
--conf spark.cassandra.connection.host=**

val df = sqlContext.read.
 | format("org.apache.spark.sql.cassandra").
 | options(Map( "table" -> "", "keyspace" -> "***")).
 | load()
java.util.NoSuchElementException: key not found: c_table
at scala.collection.MapLike$class.default(MapLike.scala:228)
at org.apache.spark.sql.execution.datasources.CaseInsensitiveMap.default(ddl.scala:151)
at scala.collection.MapLike$class.apply(MapLike.scala:141)
at org.apache.spark.sql.execution.datasources.CaseInsensitiveMap.apply(ddl.scala:151)
at org.apache.spark.sql.cassandra.DefaultSource$.TableRefAndOptions(DefaultSource.scala:120)
at org.apache.spark.sql.cassandra.DefaultSource.createRelation(DefaultSource.scala:56)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:125)


When I use sc.cassandraTable(tablename, keyspace) it works fine, but when I
call an action on it, it throws plenty of errors.

Example:
 com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured
columnfamily size_estimates

-- 
Selvam Raman
"லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"