Re: Cassandra MV insert Warning

2016-12-14 Thread Benjamin Roth
It is a bug that may happen under MV lock contention and is fixed by
https://issues.apache.org/jira/browse/CASSANDRA-12689

2016-12-15 7:58 GMT+01:00 안정아 :

> Hi, All.
>
>
>
> * Issue: materialized view inserts
>
> Environment: one cluster of 6 Cassandra servers, Cassandra version 3.7, Java
> version 1.8, Cassandra Java driver 3.1.0
>
> Data model (example column names): BaseTable((A, B), C), MV1((A, B), D, E),
> MV2((A, B), F, G), MV3((A, B), H, I)
>
> Details: I ran an 'insert' stress test (LWT, using .ifNotExists()) against
> the BaseTable through Grinder.
>
> After 1-2 hours (executed tests: 15,746,227), the Cassandra instance on one
> server goes down.
>
> (It did not go down when I performed the same test against the BaseTable
> alone, without materialized views.)
>
> The system.log from the dead Cassandra server prints the messages below once
> the server becomes unstable (when performance starts jittering).
>
> It might be a similar issue to
> https://issues.apache.org/jira/browse/CASSANDRA-11290, but I'm not sure, and
> that issue is still open.
>
>
>
> WARN [SharedPool-Worker-92] 2016-12-13 18:21:58,439 AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread Thread[SharedPool-Worker-92,5,main]: {}
> java.lang.NullPointerException: null
>     at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:541) ~[apache-cassandra-3.7.0.jar:3.7.0]
>     at org.apache.cassandra.db.Keyspace.lambda$apply$74(Keyspace.java:469) ~[apache-cassandra-3.7.0.jar:3.7.0]
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_65]
>     at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) ~[apache-cassandra-3.7.0.jar:3.7.0]
>     at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-3.7.0.jar:3.7.0]
>     at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
>
> WARN [SharedPool-Worker-12] 2016-12-13 18:21:58,440 AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread Thread[SharedPool-Worker-12,5,main]: {}
> org.apache.cassandra.exceptions.WriteTimeoutException: Operation timed out - received only 0 responses.
>     at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:460) ~[apache-cassandra-3.7.0.jar:3.7.0]
>     at org.apache.cassandra.db.Keyspace.lambda$apply$74(Keyspace.java:469) ~[apache-cassandra-3.7.0.jar:3.7.0]
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_65]
>     at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) ~[apache-cassandra-3.7.0.jar:3.7.0]
>     at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-3.7.0.jar:3.7.0]
>     at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
>
>
>
>
>
> Could this be a bug in MV, or LWT, or both? Thanks!



-- 
Benjamin Roth
Prokurist

Jaumo GmbH · www.jaumo.com
Wehrstraße 46 · 73035 Göppingen · Germany
Phone +49 7161 304880-6 · Fax +49 7161 304880-1
AG Ulm · HRB 731058 · Managing Director: Jens Kammerer


Cassandra MV insert Warning

2016-12-14 Thread 안정아


Hi, All.
 
* Issue: materialized view inserts
 
Environment: one cluster of 6 Cassandra servers, Cassandra version 3.7, Java version 1.8, Cassandra Java driver 3.1.0
Data model (example column names): BaseTable((A, B), C), MV1((A, B), D, E), MV2((A, B), F, G), MV3((A, B), H, I)
 
Details: I ran an 'insert' stress test (LWT, using .ifNotExists()) against the BaseTable through Grinder.
After 1-2 hours (executed tests: 15,746,227), the Cassandra instance on one server goes down.
(It did not go down when I performed the same test against the BaseTable alone, without materialized views.)
The system.log from the dead Cassandra server prints the messages below once the server becomes unstable (when performance starts jittering).
It might be a similar issue to https://issues.apache.org/jira/browse/CASSANDRA-11290.
But I'm not sure, and the issue is still open. 
 
WARN [SharedPool-Worker-92] 2016-12-13 18:21:58,439 AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread Thread[SharedPool-Worker-92,5,main]: {}
java.lang.NullPointerException: null
    at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:541) ~[apache-cassandra-3.7.0.jar:3.7.0]
    at org.apache.cassandra.db.Keyspace.lambda$apply$74(Keyspace.java:469) ~[apache-cassandra-3.7.0.jar:3.7.0]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_65]
    at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) ~[apache-cassandra-3.7.0.jar:3.7.0]
    at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-3.7.0.jar:3.7.0]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]

WARN [SharedPool-Worker-12] 2016-12-13 18:21:58,440 AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread Thread[SharedPool-Worker-12,5,main]: {}
org.apache.cassandra.exceptions.WriteTimeoutException: Operation timed out - received only 0 responses.
    at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:460) ~[apache-cassandra-3.7.0.jar:3.7.0]
    at org.apache.cassandra.db.Keyspace.lambda$apply$74(Keyspace.java:469) ~[apache-cassandra-3.7.0.jar:3.7.0]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_65]
    at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) ~[apache-cassandra-3.7.0.jar:3.7.0]
    at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-3.7.0.jar:3.7.0]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
 
 
Could this be a bug in MV, or LWT, or both? Thanks!
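
For reference, a rough sketch of the schema and the conditional insert described above, using the Cassandra Java driver 3.1 mentioned in the post. The contact point, keyspace, table, and view names and the replication settings are assumptions, and the view key is written as ((a, b), d, c) because a materialized view must include every base primary key column and at most one other column:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;

public class MvLwtInsertSketch {
    public static void main(String[] args) {
        // Contact point and all names below are placeholders, not from the original post.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            session.execute("CREATE KEYSPACE IF NOT EXISTS demo WITH replication = "
                    + "{'class': 'SimpleStrategy', 'replication_factor': 3}");
            // Base table with composite partition key (a, b) and clustering column c.
            session.execute("CREATE TABLE IF NOT EXISTS demo.base_table ("
                    + "a text, b text, c text, d text, e text, "
                    + "PRIMARY KEY ((a, b), c))");
            // One view in the spirit of MV1; the view key adds d and must keep c.
            session.execute("CREATE MATERIALIZED VIEW IF NOT EXISTS demo.mv1 AS "
                    + "SELECT * FROM demo.base_table "
                    + "WHERE a IS NOT NULL AND b IS NOT NULL AND c IS NOT NULL AND d IS NOT NULL "
                    + "PRIMARY KEY ((a, b), d, c)");
            // The conditional (LWT) insert the stress test issues on each iteration.
            ResultSet rs = session.execute("INSERT INTO demo.base_table (a, b, c, d, e) "
                    + "VALUES ('a1', 'b1', 'c1', 'd1', 'e1') IF NOT EXISTS");
            System.out.println("applied: " + rs.one().getBool("[applied]"));
        }
    }
}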
 
 
 
 

 
 


Re: Join_ring=false Use Cases

2016-12-14 Thread Anuj Wadehra
Can anyone help me with join_ring and address my concerns?

Thanks
Anuj 
 
  On Tue, 13 Dec, 2016 at 11:31 PM, Anuj Wadehra wrote: 
   Hi,
I need to understand the use case of join_ring=false for node outages.
As per https://issues.apache.org/jira/browse/CASSANDRA-6961, you would want
join_ring=false when you have to repair a node before bringing it back after
some considerable outage. The problem I see with join_ring=false is that,
unlike autobootstrap, the node will NOT accept writes while you are running
repair on it. If a node was down for 5 hours and you bring it back with
join_ring=false, repair it for 7 hours and then make it join the ring, it will
STILL have missed writes, because while the repair was running (7 hrs) writes
only went to the other nodes. So if you want reads served by the restored node
at CL ONE to return consistent data after the node has joined, you won't get
that, because writes were missed while the node was being repaired. And if you
work with read/write CL QUORUM, even if you bring the node back without
join_ring=false, you would get the desired consistency anyway. So how would
join_ring provide any additional consistency in this case?
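To make the consistency-level side of this concrete, here is a minimal driver-side sketch (Cassandra Java driver 3.x); the contact point, keyspace, and table are hypothetical:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class ReadConsistencySketch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            // CL ONE: a single replica answers, so a freshly restored node that
            // missed writes while it was being repaired can serve stale data.
            Statement readAtOne = new SimpleStatement(
                    "SELECT * FROM demo.some_table WHERE pk = 'k1'")
                    .setConsistencyLevel(ConsistencyLevel.ONE);
            session.execute(readAtOne);

            // CL QUORUM: with RF=3 and QUORUM writes, at least one of the replicas
            // consulted has the latest value, whether or not join_ring was used.
            Statement readAtQuorum = new SimpleStatement(
                    "SELECT * FROM demo.some_table WHERE pk = 'k1'")
                    .setConsistencyLevel(ConsistencyLevel.QUORUM);
            session.execute(readAtQuorum);
        }
    }
}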
I can see join_ring=false being useful only when I am restoring from a
snapshot or bootstrapping and there are dropped mutations in my cluster that
are not fixed by hinted handoff.
For example: 3 nodes A, B, C working at read/write CL QUORUM, with a hinted
handoff window of 3 hrs.
10 AM: Snapshot taken on all 3 nodes.
11 AM: Node B goes down for 4 hours.
3 PM: Node B comes up but its data is not repaired, so 1 hr of dropped
mutations (2-3 PM) is not fixed via hinted handoff.
5 PM: Node A crashes.
6 PM: Node A is restored from the 10 AM snapshot, started with
join_ring=false, repaired, and then joined to the cluster.
In the above restore-from-snapshot example, the updates from 2-3 PM were
outside the 3-hour hinted handoff window, so node B won't get them. Node A's
data for 2-3 PM is already lost. So the 2-3 PM updates exist on only one
replica, i.e. node C, and the minimum consistency needed is QUORUM, so
join_ring=false would help. But this is a very specific use case.
Thanks
Anuj
  


Re: Configure NTP for Cassandra

2016-12-14 Thread Anuj Wadehra
Thanks Martin. Agreed, setting up our own internal servers will help save some
firewall traffic, simplify security management and reduce load on public
servers, which is an ethical thing to do. As the blog recommended setting up
your own internal servers for Cassandra, I wanted to make sure that there are
no Cassandra-specific benefits, e.g. better relative time synchronization,
achieved with an internal setup. So I would conclude it this way: even though
it's not good practice to sync Cassandra nodes directly against external NTP
servers, Cassandra can still achieve tight relative time synchronization using
reliable external servers. There is no mandate to set up your own pool of
internal NTP servers for BETTER time synchronization.
Thanks for your inputs.
Anuj
 
  On Wed, 14 Dec, 2016 at 3:22 AM, Martin Schröder wrote:   
2016-11-26 20:20 GMT+01:00 Anuj Wadehra :
> 1. If my ISP provider is providing me a pool of reliable NTP servers, should
> I setup my own internal servers anyway or can I sync Cassandra nodes
> directly to the ISP provided servers and select one of the servers as
> preferred for relative clock synchronization?

Set up three NTP servers which use the provider servers _and_ pool servers,
and sync your other machines from those servers (and maybe get GPS receivers
for your NTP servers). This reduces NTP traffic at your firewall (your servers
act as proxies) and reduces load on the public servers.

> 2. As per my understanding, peer association is ONLY for backup scenario .
> If a peer loses time synchronization source, then other peers can be used
> for time synchronization. Thus providing a HA service. But when everything
> is ok (happy path), does defining NTP servers synced from different sources
> as peers lead them to converge time as mentioned in some forums?

Maybe; but the difference will be negligible (sub-millisecond).
I wouldn't worry about that.

Best
  Martin
  


Re: Cassandra Different cluster gossiping to each other

2016-12-14 Thread Harikrishnan Pillai
This is possible if some of the nodes are present in the system peers table of
the other cluster. This usually occurs when we decommission nodes from one
cluster and add them to another cluster. Also make sure that, before adding a
node to a new cluster, all data on its drives is properly wiped.

Sent from my iPhone

On Dec 14, 2016, at 3:11 AM, Abhishek Kumar Maheshwari wrote:

Hi All,

I am getting the below log message in my system.log:


GossipDigestSynVerbHandler.java:52 - ClusterName mismatch from /192.XXX.AA.133
QA Columbia Cluster!=QA Columbia Cluster new

The cluster name of 192.XXX.AA.133 is 'QA Columbia Cluster', and the cluster
name of the server on which I am getting this error is 'QA Columbia Cluster
new'.

I am using apache-cassandra-2.2.3. Please let me know how I can fix this.



Thanks & Regards,
Abhishek Kumar Maheshwari
+91- 805591 (Mobile)
Times Internet Ltd. | A Times of India Group Company
FC - 6, Sector 16A, Film City,  Noida,  U.P. 201301 | INDIA


Re: Cassandra Different cluster gossiping to each other

2016-12-14 Thread Jeff Jirsa
You have somehow mixed seeds (or re-used IP addresses). The good news is that 
different cluster names have prevented you from pretty ugly data issues. Check 
your seed lists and system.peers on your hosts for IPs belonging to the wrong 
cluster, and remove them.
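
A quick way to check what Jeff describes is to read system.local and system.peers on a node from each side and compare. A minimal sketch with the Cassandra Java driver; the contact point below is a placeholder:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class PeerCheckSketch {
    public static void main(String[] args) {
        // Point the placeholder contact point at one node of the affected cluster.
        try (Cluster cluster = Cluster.builder().addContactPoint("192.0.2.10").build();
             Session session = cluster.connect()) {
            // The cluster name this node believes it belongs to.
            Row local = session.execute("SELECT cluster_name FROM system.local").one();
            System.out.println("cluster_name: " + local.getString("cluster_name"));
            // The gossip peers recorded on this node; any IP that belongs to the
            // other cluster is a candidate for removal from seeds/peers.
            for (Row peer : session.execute("SELECT peer, host_id, data_center FROM system.peers")) {
                System.out.printf("peer=%s host_id=%s dc=%s%n",
                        peer.getInet("peer"), peer.getUUID("host_id"), peer.getString("data_center"));
            }
        }
    }
}

Running this against one node from each cluster makes it obvious which peer entries do not belong.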

 

 

From: Abhishek Kumar Maheshwari 
Reply-To: "user@cassandra.apache.org" 
Date: Wednesday, December 14, 2016 at 3:10 AM
To: "user@cassandra.apache.org" 
Subject: Cassandra Different cluster gossiping to each other

 

Hi All,

 

I am getting the below log message in my system.log:

 

 

GossipDigestSynVerbHandler.java:52 - ClusterName mismatch from /192.XXX.AA.133
QA Columbia Cluster!=QA Columbia Cluster new

 

The cluster name of 192.XXX.AA.133 is 'QA Columbia Cluster'.

The cluster name of the server on which I am getting this error is 'QA
Columbia Cluster new'.

 

I am using apache-cassandra-2.2.3. Please let me know how I can fix this.

 

 

 

Thanks & Regards,
Abhishek Kumar Maheshwari
+91- 805591 (Mobile)

Times Internet Ltd. | A Times of India Group Company

FC - 6, Sector 16A, Film City,  Noida,  U.P. 201301 | INDIA






Cassandra Different cluster gossiping to each other

2016-12-14 Thread Abhishek Kumar Maheshwari
Hi All,

I am getting the below log message in my system.log:


GossipDigestSynVerbHandler.java:52 - ClusterName mismatch from /192.XXX.AA.133
QA Columbia Cluster!=QA Columbia Cluster new

The cluster name of 192.XXX.AA.133 is 'QA Columbia Cluster', and the cluster
name of the server on which I am getting this error is 'QA Columbia Cluster
new'.

I am using apache-cassandra-2.2.3. Please let me know how I can fix this.



Thanks & Regards,
Abhishek Kumar Maheshwari
+91- 805591 (Mobile)
Times Internet Ltd. | A Times of India Group Company
FC - 6, Sector 16A, Film City,  Noida,  U.P. 201301 | INDIA