Edit:
1. The dc2 node has been removed (commands sketched below).
    nodetool status now shows only active nodes.
2. Repair has been run on all nodes.
3. Cassandra has been restarted.
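
A hedged sketch of the commands these steps presumably map to (the
host ID placeholder is hypothetical):

    nodetool removenode <host-id-of-dc2-node>   # step 1: remove the dead dc2 node
    nodetool status                             # confirm only live nodes remain
    nodetool repair                             # step 2: run on each remaining node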

This still hasn't solved the problem.

On Thu, Apr 28, 2016 at 9:00 AM, Siddharth Verma <
verma.siddha...@snapdeal.com> wrote:

> Hi, in case this info is useful:
> we are using two DCs:
> dc1 - 3 nodes
> dc2 - 1 node
> However, dc2 has been down for 3-4 weeks, and we haven't removed it yet.
>
> The Spark slaves run on the same machines as the Cassandra nodes;
> each node runs two slave instances.
>
> The Spark master runs on a separate machine.
>
> If anyone could provide insight into the problem, it would be helpful.
>
> Thanks
>
> On Wed, Apr 27, 2016 at 11:11 PM, Siddharth Verma <
> verma.siddha...@snapdeal.com> wrote:
>
>> Hi,
>> I don't know if anyone has faced this problem before.
>> I am running a job that loads some data from a Cassandra table. From
>> that data, I build some insert and delete statements and execute them
>> (using forEach).
>>
>> Code snippet:
>> boolean deleteStatus = connector.openSession().execute(delete).wasApplied();
>> boolean insertStatus = connector.openSession().execute(insert).wasApplied();
>> System.out.println(delete + ":" + deleteStatus);
>> System.out.println(insert + ":" + insertStatus);
>>
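>> For comparison, a minimal sketch of the same calls through one shared
>> session with an explicit consistency level (assuming the DataStax Java
>> driver 3.x bundled with the Spark Cassandra connector; LOCAL_QUORUM is
>> illustrative, not a recommendation):
>>
>>     import com.datastax.driver.core.ConsistencyLevel;
>>     import com.datastax.driver.core.Session;
>>     import com.datastax.driver.core.SimpleStatement;
>>     import com.datastax.spark.connector.cql.CassandraConnector;
>>
>>     public class StatementRunner {
>>         // Execute one CQL string and report whether it was applied.
>>         static boolean run(CassandraConnector connector, String cql) {
>>             // openSession() returns a cached session, but reusing one
>>             // reference avoids a lookup per statement.
>>             Session session = connector.openSession();
>>             SimpleStatement stmt = new SimpleStatement(cql);
>>             // Pin the consistency level so behaviour does not depend on
>>             // cluster-side defaults while one DC is down.
>>             stmt.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
>>             return session.execute(stmt).wasApplied();
>>         }
>>     }
>>
>> Note that wasApplied() is only informative for conditional (IF ...)
>> statements; for plain inserts and deletes it is always true.
>>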
>> When I run it locally, I see the expected results in the table.
>>
>> However, when I run it on a cluster, sometimes the result shows up and
>> sometimes the changes don't take place.
>> I checked the stdout from the Spark web UI, and both queries were
>> printed along with "true".
>>
>> I can't figure out what the issue could be.
>>
>> Any help would be appreciated.
>>
>> Thanks,
>> Siddharth Verma
>>
>
>
