Getting Error while Writing in Multi DC mode when Remote Dc is Down.

2017-01-23 Thread Abhishek Kumar Maheshwari
Hi All,

I have a Cassandra stack with 2 DCs:

Datacenter: DRPOCcluster

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load      Tokens  Owns  Host ID                               Rack
UN  172.29.xx.xxx  88.88 GB  256     ?     b6b8cbb9-1fed-471f-aea9-6a657e7ac80a  01
UN  172.29.xx.xxx  73.95 GB  256     ?     604abbf5-8639-4104-8f60-fd6573fb2e17  03
UN  172.29.xx.xxx  66.42 GB  256     ?     32fa79ee-93c6-4e5b-a910-f27a1e9d66c1  02

Datacenter: dc_india

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load      Tokens  Owns  Host ID                               Rack
DN  172.26.xx.xxx  78.97 GB  256     ?     3e8133ed-98b5-418d-96b5-690a1450cd30  RACK1
DN  172.26.xx.xxx  79.18 GB  256     ?     7d3f5b25-88f9-4be7-b0f5-746619153543  RACK2


I am using the below code to connect with the Java driver:

cluster = Cluster.builder()
    .addContactPoints(hostAddresses)
    .withRetryPolicy(DefaultRetryPolicy.INSTANCE)
    .withReconnectionPolicy(new ConstantReconnectionPolicy(3L))
    .withLoadBalancingPolicy(new TokenAwarePolicy(
        new DCAwareRoundRobinPolicy.Builder()
            .withLocalDc("DRPOCcluster")
            .withUsedHostsPerRemoteDc(2)
            .build()))
    .build();
cluster.getConfiguration().getQueryOptions().setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);

hostAddresses is 172.29.xx.xxx. When the DC with the 172.26.xx.xxx nodes (dc_india) is down, we are getting the below exception:


Exception in thread "main" 
com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas 
available for query at consistency QUORUM (3 required but only 2 alive)
   at 
com.datastax.driver.core.exceptions.UnavailableException.copy(UnavailableException.java:109)
   at 
com.datastax.driver.core.exceptions.UnavailableException.copy(UnavailableException.java:27)
   at 
com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
   at 
com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)

Cassandra version : 3.0.9
Datastax Java Driver Version:


<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>3.1.2</version>
</dependency>



Thanks & Regards,
Abhishek Kumar Maheshwari
+91- 805591 (Mobile)
Times Internet Ltd. | A Times of India Group Company
FC - 6, Sector 16A, Film City,  Noida,  U.P. 201301 | INDIA
Please do not print this email unless it is absolutely necessary. Spread
environmental awareness.

We the soldiers of our new economy, pledge to stop doubting and start spending, 
to enable others to go digital, to use less cash. We pledge to 
#RemonetiseIndia. Join the Times Network 'Remonetise India' movement today. To 
pledge for growth, give a missed call on +91 9223515515. Visit 
www.remonetiseindia.com


RE: Getting Error while Writing in Multi DC mode when Remote Dc is Down.

2017-01-23 Thread Abhishek Kumar Maheshwari
Thanks, Benjamin,

I found the issue: hinted handoff was turned off in cassandra.yaml.




From: Benjamin Roth [mailto:benjamin.r...@jaumo.com]
Sent: Monday, January 23, 2017 6:09 PM
To: user@cassandra.apache.org
Subject: Re: Getting Error while Writing in Multi DC mode when Remote Dc is Down.

Sorry for the short answer, I am on the run:
I guess your hints expired. The default setting is 3h. If a node is down for a longer time, no hints will be written.
Only a repair will help then.
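The 3h default Benjamin mentions is max_hint_window_in_ms in cassandra.yaml (10800000 ms in Cassandra 3.0). A minimal sketch of the rule, with illustrative downtimes:

```java
// Hinted handoff window: once a node has been down longer than
// max_hint_window_in_ms (default 3h), coordinators stop storing hints for it,
// and the writes it missed can only be recovered by running a repair.
public class HintWindow {
    static final long DEFAULT_MAX_HINT_WINDOW_MS = 3 * 60 * 60 * 1000L; // 3h default

    static boolean hintsStillStored(long nodeDownMs) {
        return nodeDownMs < DEFAULT_MAX_HINT_WINDOW_MS;
    }

    public static void main(String[] args) {
        System.out.println(hintsStillStored(30 * 60 * 1000L));     // down 30 min -> true
        System.out.println(hintsStillStored(4 * 60 * 60 * 1000L)); // down 4 h   -> false
    }
}
```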

2017-01-23 12:47 GMT+01:00 Abhishek Kumar Maheshwari <abhishek.maheshw...@timesinternet.in>:
Hi Benjamin,

I found the issue: while making the query, I was overriding LOCAL_QUORUM with QUORUM.

Also, one more Question,

I was able to insert data in DRPOCcluster. But when I bring the dc_india DC back up, the data doesn't appear in the dc_india keyspace and column family (I waited about 30 minutes)?





From: Benjamin Roth [mailto:benjamin.r...@jaumo.com]
Sent: Monday, January 23, 2017 5:05 PM
To: user@cassandra.apache.org
Subject: Re: Getting Error while Writing in Multi DC mode when Remote Dc is Down.

The query has QUORUM, not LOCAL_QUORUM, so 3 of 5 replicas are required. Maybe 1 node in DRPOCcluster was also temporarily unavailable during that query?
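Benjamin's arithmetic can be checked directly. QUORUM needs a majority of replicas across all datacenters, while LOCAL_QUORUM only needs a majority in the local DC; the replication factors below are assumptions consistent with the "3 required" in the exception:

```java
import java.util.Map;

// Quorum arithmetic: QUORUM needs a majority of ALL replicas across every DC;
// LOCAL_QUORUM needs a majority of the local DC's replicas only.
public class QuorumMath {
    static int quorum(int replicas) {
        return replicas / 2 + 1;
    }

    public static void main(String[] args) {
        // Assumed replication factors (not shown in this email): total RF = 5.
        Map<String, Integer> rf = Map.of("DRPOCcluster", 3, "dc_india", 2);
        int total = rf.values().stream().mapToInt(Integer::intValue).sum();
        System.out.println("QUORUM needs " + quorum(total) + " of " + total);       // 3 of 5
        System.out.println("LOCAL_QUORUM needs " + quorum(rf.get("DRPOCcluster"))); // 2 of 3
    }
}
```

So with all of dc_india down, QUORUM fails as soon as one DRPOCcluster replica also drops, while LOCAL_QUORUM still succeeds with 2 of the 3 local replicas.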


RE: [Multi DC] Old Data Not syncing from Existing cluster to new Cluster

2017-01-29 Thread Abhishek Kumar Maheshwari
But how will I tell the rebuild command the source DC if I have more than 2 DCs?

@Dikang, yes I ran the command, and it did something strange:

Datacenter: DRPOCcluster

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns  Host ID                               Rack
UN  172.29.XX.XXX  140.16 GB  256     ?     badf985b-37da-4735-b468-8d3a058d4b60  01
UN  172.29.XX.XXX  82.04 GB   256     ?     317061b2-c19f-44ba-a776-bcd91c70bbdd  03
UN  172.29.XX.XXX  85.29 GB   256     ?     9bf0d1dc-6826-4f3b-9c56-cec0c9ce3b6c  02

Datacenter: dc_india

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load      Tokens  Owns  Host ID                               Rack
UN  172.26.XX.XXX  79.09 GB  256     ?     3e8133ed-98b5-418d-96b5-690a1450cd30  RACK1
UN  172.26.XX.XXX  79.39 GB  256     ?     7d3f5b25-88f9-4be7-b0f5-746619153543  RACK2



In the source DC (dc_india) we have about 79 GB of data. But in the new DC each node has more than 79 GB of data, and the seed IP has about 2x the data. Below is the replication:

Data keyspaces:
alter KEYSPACE wls WITH replication = {'class': 'NetworkTopologyStrategy', 
'DRPOCcluster': '3','dc_india':'2'}  AND durable_writes = true;
alter KEYSPACE adlog WITH replication = {'class': 'NetworkTopologyStrategy', 
'DRPOCcluster': '3','dc_india':'2'}  AND durable_writes = true;

New DC ('DRPOCcluster') system keyspaces:

alter KEYSPACE system_distributed WITH replication = {'class': 
'NetworkTopologyStrategy', 'DRPOCcluster': '3','dc_india':'0'}  AND 
durable_writes = true;
alter KEYSPACE system_auth WITH replication = {'class': 
'NetworkTopologyStrategy', 'DRPOCcluster': '3','dc_india':'0'}  AND 
durable_writes = true;
alter KEYSPACE system_traces WITH replication = {'class': 
'NetworkTopologyStrategy', 'DRPOCcluster': '3','dc_india':'0'}  AND 
durable_writes = true;
alter KEYSPACE "OpsCenter" WITH replication = {'class': 
'NetworkTopologyStrategy', 'DRPOCcluster': '3','dc_india':'0'}  AND 
durable_writes = true;

Old DC ('dc_india') system keyspaces:

alter KEYSPACE system_distributed WITH replication = {'class': 
'NetworkTopologyStrategy', 'DRPOCcluster': '0','dc_india':'2'}  AND 
durable_writes = true;
alter KEYSPACE system_auth WITH replication = {'class': 
'NetworkTopologyStrategy', 'DRPOCcluster': '0','dc_india':'2'}  AND 
durable_writes = true;
alter KEYSPACE system_traces WITH replication = {'class': 
'NetworkTopologyStrategy', 'DRPOCcluster': '0','dc_india':'2'}  AND 
durable_writes = true;
alter KEYSPACE "OpsCenter" WITH replication = {'class': 
'NetworkTopologyStrategy', 'DRPOCcluster': '0','dc_india':'2'}  AND 
durable_writes = true;

Why is this happening? Did I do something wrong?


From: kurt greaves [mailto:k...@instaclustr.com]
Sent: Saturday, January 28, 2017 3:27 AM
To: user@cassandra.apache.org
Subject: Re: [Multi DC] Old Data Not syncing from Existing cluster to new Cluster

What Dikang said: in your original email you are passing -dc to rebuild. This is incorrect. Simply run nodetool rebuild  from each of the nodes in the new DC.

On 28 Jan 2017 07:50, "Dikang Gu" <dikan...@gmail.com> wrote:
Have you run 'nodetool rebuild dc_india' on the new nodes?

On Tue, Jan 24, 2017 at 7:51 AM, Benjamin Roth <benjamin.r...@jaumo.com> wrote:
Have you also altered RF of system_distributed as stated in the tutorial?


[Multi DC] Old Data Not syncing from Existing cluster to new Cluster

2017-01-24 Thread Abhishek Kumar Maheshwari
Hi All,

I have a Cassandra stack with 2 DCs:

Datacenter: DRPOCcluster

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load      Tokens  Owns  Host ID                               Rack
UN  172.29.xx.xxx  256 MB    256     ?     b6b8cbb9-1fed-471f-aea9-6a657e7ac80a  01
UN  172.29.xx.xxx  240 MB    256     ?     604abbf5-8639-4104-8f60-fd6573fb2e17  03
UN  172.29.xx.xxx  240 MB    256     ?     32fa79ee-93c6-4e5b-a910-f27a1e9d66c1  02

Datacenter: dc_india

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load      Tokens  Owns  Host ID                               Rack
DN  172.26.xx.xxx  78.97 GB  256     ?     3e8133ed-98b5-418d-96b5-690a1450cd30  RACK1
DN  172.26.xx.xxx  79.18 GB  256     ?     7d3f5b25-88f9-4be7-b0f5-746619153543  RACK2

dc_india is the old DC, which contains all the data.
I updated the keyspace as below:

alter KEYSPACE wls WITH replication = {'class': 'NetworkTopologyStrategy', 
'DRPOCcluster': '2','dc_india':'2'}  AND durable_writes = true;

But the old data is not updating in DRPOCcluster (which is new). Also, while running nodetool rebuild I am getting the below exception:
Command: ./nodetool rebuild -dc dc_india

Exception : nodetool: Unable to find sufficient sources for streaming range 
(-875697427424852,-8755484427030035332] in keyspace system_distributed

Cassandra version : 3.0.9





RE: [Multi DC] Old Data Not syncing from Existing cluster to new Cluster

2017-01-24 Thread Abhishek Kumar Maheshwari
Yes, I took all the steps. While I am inserting, new data is replicating to both DCs. But the old data is not replicating to the new cluster.


From: Benjamin Roth [mailto:benjamin.r...@jaumo.com]
Sent: Tuesday, January 24, 2017 8:55 PM
To: user@cassandra.apache.org
Subject: Re: [Multi DC] Old Data Not syncing from Existing cluster to new Cluster

There is much more to it than just changing the RF in the keyspace!

See here: 
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddDCToCluster.html





--
Benjamin Roth
Prokurist

Jaumo GmbH · www.jaumo.com<http://www.jaumo.com>
Wehrstraße 46 · 73035 Göppingen · Germany
Phone +49 7161 304880-6 · Fax +49 7161 304880-1
AG Ulm · HRB 731058 · Managing Director: Jens Kammerer


RE: [Multi DC] Old Data Not syncing from Existing cluster to new Cluster

2017-01-24 Thread Abhishek Kumar Maheshwari
My Mistake,

Both clusters are up and running.

Datacenter: DRPOCcluster

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load      Tokens  Owns  Host ID                               Rack
UN  172.29.XX.XX  1.65 GB   256     ?     badf985b-37da-4735-b468-8d3a058d4b60  01
UN  172.29.XX.XX  1.64 GB   256     ?     317061b2-c19f-44ba-a776-bcd91c70bbdd  03
UN  172.29.XX.XX  1.64 GB   256     ?     9bf0d1dc-6826-4f3b-9c56-cec0c9ce3b6c  02

Datacenter: dc_india

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load      Tokens  Owns  Host ID                               Rack
UN  172.26.XX.XX  79.90 GB  256     ?     3e8133ed-98b5-418d-96b5-690a1450cd30  RACK1
UN  172.26.XX.XX  80.21 GB  256     ?     7d3f5b25-88f9-4be7-b0f5-746619153543  RACK2


From: Benjamin Roth [mailto:benjamin.r...@jaumo.com]
Sent: Tuesday, January 24, 2017 9:11 PM
To: user@cassandra.apache.org
Subject: Re: [Multi DC] Old Data Not syncing from Existing cluster to new Cluster

I am not an expert in bootstrapping new DCs but shouldn't the OLD nodes appear 
as UP to be used as a streaming source in rebuild?



Cassandra Config as per server hardware for heavy write

2016-11-22 Thread Abhishek Kumar Maheshwari
Hi,

I have 8 servers in my Cassandra cluster. Each server has 64 GB RAM, 40 cores, and 8 SSDs. Currently I have the below config in cassandra.yaml:

concurrent_reads: 32
concurrent_writes: 64
concurrent_counter_writes: 32
compaction_throughput_mb_per_sec: 32
concurrent_compactors: 8

With this configuration, I can write 1700 Request/Sec per server.

But our desired write performance is 3000-4000 requests/sec per server. As per my understanding, the max values for these parameters can be as below:
concurrent_reads: 32
concurrent_writes: 128 (8 * 16 cores)
concurrent_counter_writes: 32
compaction_throughput_mb_per_sec: 128
concurrent_compactors: 8 or 16 (as I have 8 SSDs and 16 cores reserved for this)

Please let me know if this is fine, or if I need to tune other parameters to speed up writes.
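Before raising the yaml values, a client-side sanity check helps: with synchronous requests, throughput is capped by Little's law (throughput ≈ in-flight requests / mean latency). This sketch is not from the thread, and the latency figure is an assumption back-derived from the reported 1700 requests/sec:

```java
// Little's law sketch: sustained throughput ≈ in-flight requests / mean latency.
public class ThroughputEstimate {
    static double requestsPerSecond(int inFlight, double meanLatencyMs) {
        return inFlight * 1000.0 / meanLatencyMs;
    }

    public static void main(String[] args) {
        // ~1700 req/s from a single synchronous writer implies roughly 0.6 ms per
        // write, which points at client round trips, not the server-side settings.
        System.out.println(requestsPerSecond(1, 0.59));
        // With ~100 writes kept in flight, the same per-write latency would allow
        // far more than the desired 3000-4000 req/s.
        System.out.println(requestsPerSecond(100, 0.59));
    }
}
```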



Education gets Exciting with IIM Kozhikode Executive Post Graduate Programme in 
Management - 2 years (AMBA accredited with full benefits of IIMK Alumni 
status). Brought to you by IIMK in association with TSW, an Executive Education 
initiative from The Times of India Group. Learn more: www.timestsw.com


RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Abhishek Kumar Maheshwari
No, I am using 100 threads.


From: Vladimir Yudovin [mailto:vla...@winguzone.com]
Sent: Wednesday, November 23, 2016 2:00 PM
To: user <user@cassandra.apache.org>
Subject: RE: Cassandra Config as per server hardware for heavy write

>I have 1Cr records in my Java ArrayList and yes I am writing in sync mode.
Is your Java program single threaded?

Best regards, Vladimir Yudovin,
Winguzone<https://winguzone.com?from=list> - Cloud Cassandra Hosting, Zero 
production time





RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Abhishek Kumar Maheshwari
Hi Benjamin,

I have 1 crore (10 million) records in my Java ArrayList, and yes, I am writing in sync mode. My table is as below:

CREATE TABLE XXX_YY_MMS (
    date timestamp,
    userid text,
    time timestamp,
    xid text,
    addimid text,
    advcid bigint,
    algo bigint,
    alla text,
    aud text,
    bmid text,
    ctyid text,
    bid double,
    ctxid text,
    devipid text,
    gmid text,
    ip text,
    itcid bigint,
    iid text,
    metid bigint,
    osdid text,
    paid int,
    position text,
    pcid bigint,
    refurl text,
    sec text,
    siid bigint,
    tmpid bigint,
    xforwardedfor text,
    PRIMARY KEY (date, userid, time, xid)
) WITH CLUSTERING ORDER BY (userid ASC, time ASC, xid ASC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99.0PERCENTILE';

So please let me know what I am missing.

And is the below config fine for this hardware?

concurrent_reads: 32
concurrent_writes: 64
concurrent_counter_writes: 32
compaction_throughput_mb_per_sec: 32
concurrent_compactors: 8

thanks,
Abhishek

From: Benjamin Roth [mailto:benjamin.r...@jaumo.com]
Sent: Wednesday, November 23, 2016 12:56 PM
To: user@cassandra.apache.org
Subject: Re: Cassandra Config as per server hardware for heavy write

This is ridiculously slow for that hardware setup. Sounds like you benchmark 
with a single thread and/or sync queries or very large writes.
A setup like this should easily be able to handle tens of thousands of 
writes/s.

2016-11-23 8:02 GMT+01:00 Jonathan Haddad 
<j...@jonhaddad.com<mailto:j...@jonhaddad.com>>:
How are you benchmarking that?
On Tue, Nov 22, 2016 at 9:16 PM Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
 wrote:
Hi,

I have 8 servers in my Cassandra Cluster. Each server has 64 GB ram and 40 
Cores and 8 SSD. Currently I have below config in Cassandra.yaml:

concurrent_reads: 32
concurrent_writes: 64
concurrent_counter_writes: 32
compaction_throughput_mb_per_sec: 32
concurrent_compactors: 8

With this configuration, I can write 1700 Request/Sec per server.

But our desired write performance is 3000-4000 Request/Sec per server. As per 
my Understanding Max value for these parameters can be as below:
concurrent_reads: 32
concurrent_writes: 128(8*16 Corew)
concurrent_counter_writes: 32
compaction_throughput_mb_per_sec: 128
concurrent_compactors: 8 or 16 (as I have 8 SSD and 16 core reserve for this)

Please let me know this is fine or I need to tune some other parameters for 
speedup write.


Thanks & Regards,
Abhishek Kumar Maheshwari






RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Abhishek Kumar Maheshwari
Hi Siddharth,

To me it seems to be on the Cassandra side, because I have a list with 1 crore 
(10 million) records and I am just iterating over it and executing the query.
Also, I tried with 200 threads but the speed still doesn't increase as much as 
expected. In Grafana, write latency is about 10 ms.

Thanks & Regards,
Abhishek Kumar Maheshwari

From: siddharth verma [mailto:sidd.verma29.l...@gmail.com]
Sent: Wednesday, November 23, 2016 2:23 PM
To: user@cassandra.apache.org
Subject: Re: Cassandra Config as per server hardware for heavy write

Hi Abhishek,
You could check whether you are throttling on the client side or on the 
Cassandra side.
You could also use Grafana to monitor the cluster.
As you said you are using 100 threads, it is not certain whether you are 
pushing the Cassandra cluster to its max limit.

As Benjamin suggested, you could use the cassandra-stress tool.

Lastly, if after everything (and you are sure that Cassandra seems slow) the 
TPS comes out to be the numbers you mentioned, you could check your schema 
(many rows in one partition key), read queries, read/write load, write queries 
with Batch/LWT, compactions running, etc.


For checking ONLY Cassandra throughput, you could use cassandra-stress with any 
schema of your choice.

Regards


On Wed, Nov 23, 2016 at 2:07 PM, Vladimir Yudovin 
<vla...@winguzone.com<mailto:vla...@winguzone.com>> wrote:
So do you see write-speed saturation at this number of threads? Does doubling 
to 200 bring an increase?


Best regards, Vladimir Yudovin,
Winguzone<https://winguzone.com?from=list> - Cloud Cassandra Hosting, Zero 
production time


 On Wed, 23 Nov 2016 03:31:32 -0500Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
 wrote 

No I am using 100 threads.

Thanks & Regards,
Abhishek Kumar Maheshwari

From: Vladimir Yudovin 
[mailto:vla...@winguzone.com<mailto:vla...@winguzone.com>]
Sent: Wednesday, November 23, 2016 2:00 PM
To: user <user@cassandra.apache.org<mailto:user@cassandra.apache.org>>
Subject: RE: Cassandra Config as per server hardware for heavy write

>I have 1Cr records in my Java ArrayList and yes I am writing in sync mode.
Is your Java program single threaded?

Best regards, Vladimir Yudovin,
Winguzone<https://winguzone.com?from=list> - Cloud Cassandra Hosting, Zero 
production time


 On Wed, 23 Nov 2016 03:09:29 -0500Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
 wrote 

Hi Benjamin,

I have 1Cr records in my Java ArrayList and yes I am writing in sync mode. My 
table is as below:

CREATE TABLE XXX_YY_MMS (
date timestamp,
userid text,
time timestamp,
xid text,
addimid text,
advcid bigint,
algo bigint,
alla text,
aud text,
bmid text,
ctyid text,
bid double,
ctxid text,
devipid text,
gmid text,
ip text,
itcid bigint,
iid text,
metid bigint,
osdid text,
paid int,
position text,
pcid bigint,
refurl text,
sec text,
siid bigint,
tmpid bigint,
xforwardedfor text,
PRIMARY KEY (date, userid, time, xid)
) WITH CLUSTERING ORDER BY (userid ASC, time ASC, xid ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';

So please let me know what I miss?

And for this hardware below config is fine?

concurrent_reads: 32
concurrent_writes: 64
concurrent_counter_writes: 32
compaction_throughput_mb_per_sec: 32
concurrent_compactors: 8

thanks,
Abhishek

From: Benjamin Roth 
[mailto:benjamin.r...@jaumo.com<mailto:benjamin.r...@jaumo.com>]
Sent: Wednesday, November 23, 2016 12:56 PM
To: user@cassandra.apache.org<mailto:user@cassandra.apache.org>
Subject: Re: Cassandra Config as per server hardware for heavy write

This is ridiculously

RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Abhishek Kumar Maheshwari
But I need to do it in sync mode as per the business requirement. If something 
goes wrong then it should be replayable. That's why I am using sync mode.

Thanks & Regards,
Abhishek Kumar Maheshwari

From: Vladimir Yudovin [mailto:vla...@winguzone.com]
Sent: Wednesday, November 23, 2016 3:47 PM
To: user <user@cassandra.apache.org>
Subject: RE: Cassandra Config as per server hardware for heavy write

session.execute is coming from Session session = cluster.connect(); I guess?

So actually all threads work with the same TCP connection. It's worth trying 
the async API with a connection pool.

Best regards, Vladimir Yudovin,
Winguzone<https://winguzone.com?from=list> - Cloud Cassandra Hosting


 On Wed, 23 Nov 2016 04:49:18 -0500Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
 wrote 

Hi

I am submitting records to an ExecutorService; below is my client config and 
code:

cluster = Cluster.builder().addContactPoints(hostAddresses)
        .withRetryPolicy(DefaultRetryPolicy.INSTANCE)
        .withReconnectionPolicy(new ConstantReconnectionPolicy(3L))
        .withLoadBalancingPolicy(new TokenAwarePolicy(new DCAwareRoundRobinPolicy()))
        .build();

ExecutorService service = Executors.newFixedThreadPool(1000);
for (final AdLog adLog : li) {
    service.submit(() -> {
        session.execute(ktest.adImprLogToStatement(adLog.getAdLogType(), adLog.getAdImprLog()));
        inte.incrementAndGet();
    });
}



Thanks & Regards,
Abhishek Kumar Maheshwari

From: Vladimir Yudovin 
[mailto:vla...@winguzone.com<mailto:vla...@winguzone.com>]
Sent: Wednesday, November 23, 2016 3:15 PM
To: user <user@cassandra.apache.org<mailto:user@cassandra.apache.org>>
Subject: RE: Cassandra Config as per server hardware for heavy write

>I have a list with 1cr record. I am just iterating on it and executing the 
>query. Also, I try with 200 thread
Do you fetch each list item and hand it to a separate thread to perform the 
CQL query? Also, how exactly do you connect to Cassandra?
If you use the synchronous API it's better to create a connection pool (with 
TokenAwarePolicy for each) and then pass each item to a separate thread.


Best regards, Vladimir Yudovin,
Winguzone<https://winguzone.com?from=list> - Cloud Cassandra Hosting


 On Wed, 23 Nov 2016 04:23:13 -0500Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
 wrote 

Hi Siddharth,

For me it seems Cassandra side. Because I have a list with 1cr record. I am 
just iterating on it and executing the query.
Also, I try with 200 thread but still speed doesn’t increase that much as 
expected. On grafana write latency is near about 10Ms.

Thanks & Regards,
Abhishek Kumar Maheshwari

From: siddharth verma 
[mailto:sidd.verma29.l...@gmail.com<mailto:sidd.verma29.l...@gmail.com>]
Sent: Wednesday, November 23, 2016 2:23 PM
To: user@cassandra.apache.org<mailto:user@cassandra.apache.org>
Subject: Re: Cassandra Config as per server hardware for heavy write

Hi Abhishek,
You could check whether you are throttling on client side queries or on 
cassandra side.
You could also use grafana to monitor the cluster as well.
As you said, you are using 100 threads, it can't be sure whether you are 
throttling cassandra cluster to its max limit.

As Benjamin suggested, you could use cassandra stress tool.

Lastly, if after everything( and you are sure, that cassandra seems slow) the 
TPS comes out to be the numbers as you suggested, you could check you schema, 
many rows in one partition key, read queries, read write load, write queries 
with Batch/LWT, compactions running etc.


For checking ONLY cassandra throughput, you could use cassandra-stress with any 
schema of your choice.

Regards


On Wed, Nov 23, 2016 at 2:07 PM, Vladimir Yudovin 
<vla...@winguzone.com<mailto:vla...@winguzone.com>> wrote:
So do you see speed write saturation at this number of thread? Does doubling to 
200 bring increase?


Best regards, Vladimir Yudovin,
Winguzone<https://winguzone.com?from=list> - Cloud Cassandra Hosting, Zero 
production

RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Abhishek Kumar Maheshwari
Hi

I am submitting records to an ExecutorService; below is my client config and 
code:

cluster = Cluster.builder().addContactPoints(hostAddresses)
        .withRetryPolicy(DefaultRetryPolicy.INSTANCE)
        .withReconnectionPolicy(new ConstantReconnectionPolicy(3L))
        .withLoadBalancingPolicy(new TokenAwarePolicy(new DCAwareRoundRobinPolicy()))
        .build();

ExecutorService service = Executors.newFixedThreadPool(1000);
for (final AdLog adLog : li) {
    service.submit(() -> {
        session.execute(ktest.adImprLogToStatement(adLog.getAdLogType(), adLog.getAdImprLog()));
        inte.incrementAndGet();
    });
}



Thanks & Regards,
Abhishek Kumar Maheshwari

From: Vladimir Yudovin [mailto:vla...@winguzone.com]
Sent: Wednesday, November 23, 2016 3:15 PM
To: user <user@cassandra.apache.org>
Subject: RE: Cassandra Config as per server hardware for heavy write

>I have a list with 1cr record. I am just iterating on it and executing the 
>query. Also, I try with 200 thread
Do you fetch each list item and put it to separate thread to perform CQL query? 
Also how exactly do you connect to Cassandra?
If you use synchronous API so it's better to create connection pool (with 
TokenAwarePolicy each) and then pass each item to separate thread.


Best regards, Vladimir Yudovin,
Winguzone<https://winguzone.com?from=list> - Cloud Cassandra Hosting


 On Wed, 23 Nov 2016 04:23:13 -0500Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
 wrote 

Hi Siddharth,

For me it seems Cassandra side. Because I have a list with 1cr record. I am 
just iterating on it and executing the query.
Also, I try with 200 thread but still speed doesn’t increase that much as 
expected. On grafana write latency is near about 10Ms.

Thanks & Regards,
Abhishek Kumar Maheshwari

From: siddharth verma 
[mailto:sidd.verma29.l...@gmail.com<mailto:sidd.verma29.l...@gmail.com>]
Sent: Wednesday, November 23, 2016 2:23 PM
To: user@cassandra.apache.org<mailto:user@cassandra.apache.org>
Subject: Re: Cassandra Config as per server hardware for heavy write

Hi Abhishek,
You could check whether you are throttling on client side queries or on 
cassandra side.
You could also use grafana to monitor the cluster as well.
As you said, you are using 100 threads, it can't be sure whether you are 
throttling cassandra cluster to its max limit.

As Benjamin suggested, you could use cassandra stress tool.

Lastly, if after everything( and you are sure, that cassandra seems slow) the 
TPS comes out to be the numbers as you suggested, you could check you schema, 
many rows in one partition key, read queries, read write load, write queries 
with Batch/LWT, compactions running etc.


For checking ONLY cassandra throughput, you could use cassandra-stress with any 
schema of your choice.

Regards


On Wed, Nov 23, 2016 at 2:07 PM, Vladimir Yudovin 
<vla...@winguzone.com<mailto:vla...@winguzone.com>> wrote:
So do you see speed write saturation at this number of thread? Does doubling to 
200 bring increase?


Best regards, Vladimir Yudovin,
Winguzone<https://winguzone.com?from=list> - Cloud Cassandra Hosting, Zero 
production time


 On Wed, 23 Nov 2016 03:31:32 -0500Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
 wrote 

No I am using 100 threads.

Thanks & Regards,
Abhishek Kumar Maheshwari

From: Vladimir Yudovin 
[mailto:vla...@winguzone.com<mailto:vla...@winguzone.com>]
Sent: Wednesday, November 23, 2016 2:00 PM
To: user <user@cassandra.apache.org<mailto:user@cassandra.apache.org>>
Subject: RE: Cassandra Config as per server hardware for heavy write

>I have 1Cr records in my Java ArrayList and yes I am writing in sync mode.
Is your Java program single threaded?

Best regards, Vladimir Yudovin,
Winguzone<https://winguzone.com?from=list> - Cloud Cassandra Hosting, Zero 
production time


 On Wed, 23 Nov 2016 03:09:29 -0500Abhishek Kumar Maheshwari 
<abh

RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Abhishek Kumar Maheshwari
Hi Vladimir,

I tried the same but it doesn't increase. Also, in Grafana the average write 
latency is about 10 ms.

Thanks & Regards,
Abhishek Kumar Maheshwari

From: Vladimir Yudovin [mailto:vla...@winguzone.com]
Sent: Wednesday, November 23, 2016 2:07 PM
To: user <user@cassandra.apache.org>
Subject: RE: Cassandra Config as per server hardware for heavy write

So do you see speed write saturation at this number of thread? Does doubling to 
200 bring increase?


Best regards, Vladimir Yudovin,
Winguzone<https://winguzone.com?from=list> - Cloud Cassandra Hosting, Zero 
production time


 On Wed, 23 Nov 2016 03:31:32 -0500Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
 wrote 

No I am using 100 threads.

Thanks & Regards,
Abhishek Kumar Maheshwari

From: Vladimir Yudovin 
[mailto:vla...@winguzone.com<mailto:vla...@winguzone.com>]
Sent: Wednesday, November 23, 2016 2:00 PM
To: user <user@cassandra.apache.org<mailto:user@cassandra.apache.org>>
Subject: RE: Cassandra Config as per server hardware for heavy write

>I have 1Cr records in my Java ArrayList and yes I am writing in sync mode.
Is your Java program single threaded?

Best regards, Vladimir Yudovin,
Winguzone<https://winguzone.com?from=list> - Cloud Cassandra Hosting, Zero 
production time


 On Wed, 23 Nov 2016 03:09:29 -0500Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
 wrote 

Hi Benjamin,

I have 1Cr records in my Java ArrayList and yes I am writing in sync mode. My 
table is as below:

CREATE TABLE XXX_YY_MMS (
date timestamp,
userid text,
time timestamp,
xid text,
addimid text,
advcid bigint,
algo bigint,
alla text,
aud text,
bmid text,
ctyid text,
bid double,
ctxid text,
devipid text,
gmid text,
ip text,
itcid bigint,
iid text,
metid bigint,
osdid text,
paid int,
position text,
pcid bigint,
refurl text,
sec text,
siid bigint,
tmpid bigint,
xforwardedfor text,
PRIMARY KEY (date, userid, time, xid)
) WITH CLUSTERING ORDER BY (userid ASC, time ASC, xid ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';

So please let me know what I miss?

And for this hardware below config is fine?

concurrent_reads: 32
concurrent_writes: 64
concurrent_counter_writes: 32
compaction_throughput_mb_per_sec: 32
concurrent_compactors: 8

thanks,
Abhishek

From: Benjamin Roth 
[mailto:benjamin.r...@jaumo.com<mailto:benjamin.r...@jaumo.com>]
Sent: Wednesday, November 23, 2016 12:56 PM
To: user@cassandra.apache.org<mailto:user@cassandra.apache.org>
Subject: Re: Cassandra Config as per server hardware for heavy write

This is ridiculously slow for that hardware setup. Sounds like you benchmark 
with a single thread and / or sync queries or very large writes.
A setup like this should be easily able to handle tens of thousands of writes / 
s

2016-11-23 8:02 GMT+01:00 Jonathan Haddad 
<j...@jonhaddad.com<mailto:j...@jonhaddad.com>>:
How are you benchmarking that?
On Tue, Nov 22, 2016 at 9:16 PM Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
 wrote:
Hi,

I have 8 servers in my Cassandra Cluster. Each server has 64 GB ram and 40 
Cores and 8 SSD. Currently I have below config in Cassandra.yaml:

concurrent_reads: 32
concurrent_writes: 64
concurrent_counter_writes: 32
compaction_throughput_mb_per_sec: 32
concurrent_compactors: 8

With this configuration, I can write 1700 Request/Sec per server.

But our desired write performance is 3000-4000 Request/Sec per server. As per 
my Understanding Max value for these parameters can be as below:
concurrent_reads: 32
concurrent_writes: 128(

RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Abhishek Kumar Maheshwari
Yes, I also tried async mode but I got a max speed of 2500 requests/sec per 
server.

ExecutorService service = Executors.newFixedThreadPool(1000);
for (final AdLog adLog : li) {
    service.submit(() -> {
        session.executeAsync(ktest.adImprLogToStatement(adLog.getAdLogType(), adLog.getAdImprLog()));
        inte.incrementAndGet();
    });
}
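[Editor's note] One common reason an unbounded executeAsync loop plateaus is the lack of back-pressure: the loop can queue far more requests than the cluster can absorb. A minimal, self-contained sketch of capping in-flight requests with a Semaphore is below; simulateWrite is a stand-in stub, not the DataStax API, and in real code the driver's async call would take its place (an assumption, not a measured fix for this cluster):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Back-pressure sketch: a Semaphore caps concurrent in-flight "writes" so the
// client cannot flood the server. simulateWrite is a local stand-in for a
// real async database call.
public class BoundedAsync {
    static final int MAX_IN_FLIGHT = 128;
    static final Semaphore permits = new Semaphore(MAX_IN_FLIGHT);
    static final AtomicInteger completed = new AtomicInteger();

    static CompletableFuture<Void> simulateWrite(int i) {
        return CompletableFuture.runAsync(() -> completed.incrementAndGet());
    }

    public static void main(String[] args) throws Exception {
        final int total = 10_000;
        CompletableFuture<?>[] futures = new CompletableFuture<?>[total];
        for (int i = 0; i < total; i++) {
            permits.acquire();                 // blocks once MAX_IN_FLIGHT writes are pending
            futures[i] = simulateWrite(i)
                    .whenComplete((r, e) -> permits.release()); // free the slot on completion
        }
        CompletableFuture.allOf(futures).get(); // drain everything before exit
        System.out.println("completed=" + completed.get());
    }
}
```

With a bound like this, raising or lowering MAX_IN_FLIGHT gives a knob to find the cluster's actual saturation point instead of saturating the client-side queue.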

Thanks & Regards,
Abhishek Kumar Maheshwari

From: Benjamin Roth [mailto:benjamin.r...@jaumo.com]
Sent: Wednesday, November 23, 2016 4:09 PM
To: user@cassandra.apache.org
Subject: Re: Cassandra Config as per server hardware for heavy write

This has nothing to do with sync/async operations. An async operation is also 
replayable; you receive the result in a future instead.
Have you ever dealt with async programming techniques like promises, futures 
and callbacks?
Async programming does not change the fact that you get a result for your 
operation; it only changes where and when you get it.
Doing sync operations means the result is available in the "next line of code", 
whereas an async operation means that some handler is called when the result is 
there.

There are tons of articles about this on the web.
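[Editor's note] The sync-vs-async distinction above can be sketched without any driver at all; fakeWrite below is an illustrative local stub, not the DataStax API:

```java
import java.util.concurrent.CompletableFuture;

// Illustrates sync vs. async result handling. fakeWrite is a local stub that
// stands in for a database write; it is not the DataStax driver API.
public class SyncVsAsync {

    // Async "write": returns a future immediately; the result arrives later.
    static CompletableFuture<String> fakeWrite(String row) {
        return CompletableFuture.supplyAsync(() -> "applied:" + row);
    }

    public static void main(String[] args) throws Exception {
        // Sync style: block until the result is available on the next line.
        String syncResult = fakeWrite("row-1").get();

        // Async style: register a handler; the calling thread stays free, and
        // a failure surfaces in the future, so the write is still replayable.
        CompletableFuture<String> f = fakeWrite("row-2")
                .exceptionally(err -> "retry:row-2"); // hook for replay on error

        System.out.println(syncResult + " / " + f.get());
    }
}
```

The exceptionally handler is where a retry (replay) would be scheduled, which is the point Benjamin makes: async does not sacrifice replayability.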

2016-11-23 11:29 GMT+01:00 Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>:
But I need to do it in sync mode as per business requirement. If something went 
wrong then it should be replayle. That’s why I am using sync mode.

Thanks & Regards,
Abhishek Kumar Maheshwari

From: Vladimir Yudovin 
[mailto:vla...@winguzone.com<mailto:vla...@winguzone.com>]
Sent: Wednesday, November 23, 2016 3:47 PM
To: user <user@cassandra.apache.org<mailto:user@cassandra.apache.org>>
Subject: RE: Cassandra Config as per server hardware for heavy write

session.execute is coming from Session session = cluster.connect(); I guess?

So actually all threads work with the same TCP connection. It's worth to try 
async API with Connection Pool.

Best regards, Vladimir Yudovin,
Winguzone<https://winguzone.com?from=list> - Cloud Cassandra Hosting


 On Wed, 23 Nov 2016 04:49:18 -0500Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
 wrote 

Hi

I am submitting record to Executor service and below is my client config and 
code:

cluster = Cluster.builder().addContactPoints(hostAddresses)
   .withRetryPolicy(DefaultRetryPolicy.INSTANCE)
   .withReconnectionPolicy(new 
ConstantReconnectionPolicy(3L))
   .withLoadBalancingPolicy(new TokenAwarePolicy(new 
DCAwareRoundRobinPolicy()))
   .build();

   ExecutorService service=Executors.newFixedThreadPool(1000);
for(final AdLog adLog:li){
service.submit(()->{
session.execute(ktest.adImprLogToStatement(adLog.getAdLogType(),adLog.getAdImprLog()));
inte.incrementAndGet();
 });
  }



Thanks & Regards,
Abhishek Kumar Maheshwari

From: Vladimir Yudovin 
[mailto:vla...@winguzone.com<mailto:vla...@winguzone.com>]
Sent: Wednesday, November 23, 2016 3:15 PM
To: user <user@cassandra.apache.org<mailto:user@cassandra.apache.org>>
Subject: RE: Cassandra Config as per server hardware for heavy write

>I have a list with 1cr record. I am just iterating on it and executing the 
>query. Also, I try with 200 thread
Do you fetch each list item and put it to separate thread to perform CQL query? 
Also how exactly do you connect to Cassandra?
If you use synchronous API so it's better to create connection pool (with 
TokenAwarePolicy each) and then pass each item to separate thread.


Best regards, Vladimir Yudovin,
Winguzone<https://winguzone.com?from=list> - Cloud Cassandra Hosting


 On Wed, 23 Nov 2016 04:23:13 -0500Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
 wrote 

Hi Siddharth,

For me it seems Cassandra side. Because I have a list with 1cr record. I am 
just iterating on it and executing the query

Cassandra Multi DC with diff version.

2016-11-27 Thread Abhishek Kumar Maheshwari
Hi All,

We have 2 Cassandra DC with below config:

DC1: In DC1 we have 9 servers with 64 GB RAM, 40-core machines. In this DC we 
have Cassandra version 2.1.4, and we have 2 TB of data on each server. The 
application is connected to this DC.
DC2: In DC2 we have 5 servers with 64 GB RAM, 40-core machines. In this DC we 
have Cassandra version 3.0.9.

My question is: will both DCs be in sync perfectly?

What will happen if I use LOCAL_QUORUM on both DCs with the same queries?

Thanks & Regards,
Abhishek Kumar Maheshwari



Cassandra Different cluster gossiping to each other

2016-12-14 Thread Abhishek Kumar Maheshwari
Hi All,

I am getting the below log in my system.log:


GossipDigestSynVerbHandler.java:52 - ClusterName mismatch from /192.XXX.AA.133 
QA Columbia Cluster != QA Columbia Cluster new

The cluster name on 192.XXX.AA.133 is "QA Columbia Cluster", and the cluster 
name on the server where I am getting this error is "QA Columbia Cluster new".

I am using apache-cassandra-2.2.3. Please let me know how I can fix this.



Thanks & Regards,
Abhishek Kumar Maheshwari



[Error in Cassandra Log] Unexpected exception during request

2017-01-02 Thread Abhishek Kumar Maheshwari
Hi all,

Currently I am using Cassandra version 3.0.9 and Datastax Driver 3.1.2.

I am running the application on the same server where Cassandra is running. I 
am able to insert data into Cassandra, but I am also getting the below error 
in the Cassandra log.

INFO  [SharedPool-Worker-1] 2017-01-02 15:16:36,166 Message.java:611 - 
Unexpected exception during request; channel = [id: 0x5b0c467d, 
/XXX.XX.18.59:22763 :> /XXX.XX.18.59:9042]
java.io.IOException: Error while read(...): Connection reset by peer
at io.netty.channel.epoll.Native.readAddress(Native Method) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.doReadBytes(EpollSocketChannel.java:675)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:714)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) 
~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]




Multi DC Production ready Cassandra version

2016-12-18 Thread Abhishek Kumar Maheshwari
Hi All,

Please let me know which version of Cassandra I can use in multi-DC Cassandra 
clusters.



Thanks & Regards,
Abhishek Kumar Maheshwari



RE: Cassandra cluster performance

2016-12-25 Thread Abhishek Kumar Maheshwari
Hi Branislav,


What is your column family definition?


Thanks & Regards,
Abhishek Kumar Maheshwari

From: Branislav Janosik -T (bjanosik - AAP3 INC at Cisco) 
[mailto:bjano...@cisco.com]
Sent: Thursday, December 22, 2016 6:18 AM
To: user@cassandra.apache.org
Subject: Re: Cassandra cluster performance

Hi,

- Consistency level is set to ONE
-  Keyspace definition:

"CREATE KEYSPACE  IF NOT EXISTS  onem2m " +
"WITH replication = " +
"{ 'class' : 'SimpleStrategy', 'replication_factor' : 1}";



- yes, the client is on separate VM

- In our project we use Cassandra API version 3.0.2 but the database (cluster) 
is version 3.9

- for 2node cluster:

 first VM: 25 GB RAM, 16 CPUs

 second VM: 16 GB RAM, 16 CPUs




From: Ben Slater <ben.sla...@instaclustr.com<mailto:ben.sla...@instaclustr.com>>
Reply-To: "user@cassandra.apache.org<mailto:user@cassandra.apache.org>" 
<user@cassandra.apache.org<mailto:user@cassandra.apache.org>>
Date: Wednesday, December 21, 2016 at 2:32 PM
To: "user@cassandra.apache.org<mailto:user@cassandra.apache.org>" 
<user@cassandra.apache.org<mailto:user@cassandra.apache.org>>
Subject: Re: Cassandra cluster performance

You would expect some drop when moving from a single node to multiple nodes, but
on the face of it that feels extreme to me (although I’ve never personally
tested the difference). Some questions that might help provide an answer:
- what consistency level are you using for the test?
- what is your keyspace definition (replication factor most importantly)?
- where are you running your test client (is it a separate box to cassandra)?
- what C* version?
- what are specs (CPU, RAM) of the test servers?

Cheers
Ben

On Thu, 22 Dec 2016 at 09:26 Branislav Janosik -T (bjanosik - AAP3 INC at 
Cisco) <bjano...@cisco.com<mailto:bjano...@cisco.com>> wrote:
Hi all,

I’m working on a project and we have a Java benchmark test for measuring
performance when using the Cassandra database. Create operations on a
single-node Cassandra cluster run at about 15K operations per second. The
problem is that when I set up a cluster with 2 or more nodes (each on separate
virtual machines and servers), the performance drops to 1K ops/sec. I follow
the official instructions on how to set up a multinode cluster – the only
things I change in the cassandra.yaml file are: set seeds to the IP address of
one node, set the listen and rpc addresses to the IP address of the node, and
change the endpoint snitch to GossipingPropertyFileSnitch. The replication
factor is set to 1 for the 2-node cluster. I use only one datacenter. The
cluster seems to be doing fine (I can see the nodes communicating), and so is
the CPU and RAM usage on the machines.

Does anybody have any ideas? Any help would be very appreciated.

Thanks!
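One thing worth checking given RF = 1 and consistency ONE: whether the client routes requests token-aware. With RF = 1, each partition lives on exactly one node, so a client that picks coordinators round-robin sends roughly half its requests to a node that does not own the data and pays an extra network hop per operation. This toy single-token ring (a simplification; real nodes would typically use vnodes) sketches the ownership lookup a token-aware driver performs:

```java
import java.util.Map;
import java.util.TreeMap;

public class TokenRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();

    void addNode(String name, long token) { ring.put(token, name); }

    // Owner = node with the smallest token >= the partition's token,
    // wrapping around to the lowest token (simplified: one token per node).
    String ownerOf(long partitionToken) {
        Map.Entry<Long, String> e = ring.ceilingEntry(partitionToken);
        return (e != null ? e : ring.firstEntry()).getValue();
    }

    public static void main(String[] args) {
        TokenRing r = new TokenRing();
        r.addNode("node1", -3_000_000_000L);
        r.addNode("node2", 3_000_000_000L);
        // With RF = 1 only the owner has the row; a non-token-aware client
        // that coordinates on the other node pays an extra hop.
        System.out.println(r.ownerOf(0L));             // node2
        System.out.println(r.ownerOf(5_000_000_000L)); // wraps to node1
    }
}
```

If the benchmark client uses the Java driver, wrapping the load-balancing policy in TokenAwarePolicy (as shown in the multi-DC thread in this archive) is the usual way to get direct routing.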



RE: [Cassandra 3.0.9] Cannot allocate memory

2017-03-23 Thread Abhishek Kumar Maheshwari
Thanks Jayesh,

I found the fix.

I made the below changes.

In /etc/sysctl.conf:
vm.max_map_count = 1048575

In a file under /etc/security/limits.d/:

root - memlock unlimited
root - nofile 10
root - nproc 32768
root - as unlimited


Thanks & Regards,
Abhishek Kumar Maheshwari

From: Thakrar, Jayesh [mailto:jthak...@conversantmedia.com]
Sent: Thursday, March 23, 2017 8:36 PM
To: Abhishek Kumar Maheshwari <abhishek.maheshw...@timesinternet.in>; 
user@cassandra.apache.org
Subject: Re: [Cassandra 3.0.9] Cannot allocate memory

dmesg will often print a message saying that it had to kill a process if the
server was short of memory, so you will have to dump the output to a file and
check.
If a process is killed to reclaim memory for the system, the kernel dumps a
list of all processes along with the one that was actually killed.
So you can check for a kill like this: "dmesg | grep -i kill"
If you do find a line (or two), then you need to examine the output carefully.

In production I also tend to dump a lot of GC output, which helps with
troubleshooting.
E.g. below is what I have.
If you look, I also have a flag that dumps heap files if the heap runs out of
memory (which is rare).
If dmesg does not show your processes being killed, then you may have to enable
GC logging to get some insight.

-XX:+UseThreadPriorities
-XX:ThreadPriorityPolicy=42
-Xms16G
-Xmx16G
-Xmn4800M
-XX:+HeapDumpOnOutOfMemoryError
-Xss256k
-XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled
-XX:+ParallelRefProcEnabled
-XX:+CMSClassUnloadingEnabled
-XX:CMSInitiatingOccupancyFraction=80
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+UseParNewGC
-XX:MaxTenuringThreshold=2
-XX:SurvivorRatio=8
-XX:+UnlockDiagnosticVMOptions
-XX:ParGCCardsPerStrideChunk=32768
-XX:NewSize=750m
-XX:MaxNewSize=750m
-XX:+UseCondCardMark
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:+PrintGCTimeStamps
-XX:+PrintHeapAtGC
-XX:+PrintTenuringDistribution
-XX:+PrintGCApplicationStoppedTime
-XX:+PrintPromotionFailure
-Xloggc:
-XX:+UseGCLogFileRotation
-XX:NumberOfGCLogFiles=10
-XX:GCLogFileSize=1M
-Djava.net.preferIPv4Stack=true
-Dcom.sun.management.jmxremote.port=7199
-Dcom.sun.management.jmxremote.ssl=<true|false>
-Dcom.sun.management.jmxremote.authenticate=<true|false>
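The "dmesg | grep -i kill" check can also be scripted once dmesg output has been captured; a minimal sketch (the sample lines are illustrative, not taken from this cluster):

```java
import java.util.List;
import java.util.stream.Collectors;

public class OomKillScan {
    // Match the kernel's OOM-killer messages, e.g.
    // "Out of memory: Kill process 8264 (java) score 912 or sacrifice child"
    static List<String> killLines(List<String> dmesg) {
        return dmesg.stream()
                    .filter(l -> l.toLowerCase().contains("kill"))
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> sample = List.of(
            "[12345.678] java invoked oom-killer: gfp_mask=0x201da, order=0",
            "[12345.680] Out of memory: Kill process 8264 (java) score 912 or sacrifice child",
            "[12399.001] eth0: link up");
        // Prints only the two lines containing "kill" / "Kill".
        killLines(sample).forEach(System.out::println);
    }
}
```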




From: Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
Date: Wednesday, March 22, 2017 at 5:18 PM
To: "user@cassandra.apache.org<mailto:user@cassandra.apache.org>" 
<user@cassandra.apache.org<mailto:user@cassandra.apache.org>>
Subject: RE: [Cassandra 3.0.9] Cannot allocate memory

JVM config is as below:

-Xms16G
-Xmx16G
-Xmn3000M

What do I need to check in dmesg?

From: Thakrar, Jayesh [mailto:jthak...@conversantmedia.com]
Sent: 23 March 2017 03:39
To: Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>;
 user@cassandra.apache.org<mailto:user@cassandra.apache.org>
Subject: RE: [Cassandra 3.0.9] Cannot allocate memory


And what is the configured max heap?
Sometimes you may also be able to see some useful messages in "dmesg" output.

Jayesh


From: Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
Sent: Wednesday, March 22, 2017 5:05:14 PM
To: Thakrar, Jayesh; user@cassandra.apache.org<mailto:user@cassandra.apache.org>
Subject: RE: [Cassandra 3.0.9] Cannot allocate memory

No, only Cassandra is running on these servers.

From: Thakrar, Jayesh [mailto:jthak...@conversantmedia.com]
Sent: 22 March 2017 22:27
To: Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>;
 user@cassandra.apache.org<mailto:user@cassandra.apache.org>
Subject: Re: [Cassandra 3.0.9] Cannot allocate memory

Is/are the Cassandra server(s) shared?
E.g. do they run mesos + spark?

From: Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
Date: Wednesday, March 22, 2017 at 12:45 AM
To: "user@cassandra.apache.org<mailto:user@cassandra.apache.org>" 
<user@cassandra.apache.org<mailto:user@cassandra.apache.org>>
Subject: [Cassandra 3.0.9] Cannot allocate memory

Hi all,

I am using Cassandra 3.0.9. While I am adding a new server, after some time I
get the exception below. The JVM options are attached.
Hardware info:
Ram 64 GB.
Core: 40


Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7fe9c44ee000, 12288, 0) failed; error='Cannot allocate 
memory' (errno=12)

RE: [Cassandra 3.0.9] Cannot allocate memory

2017-03-22 Thread Abhishek Kumar Maheshwari
JVM config is as below:

-Xms16G
-Xmx16G
-Xmn3000M

What do I need to check in dmesg?

From: Thakrar, Jayesh [mailto:jthak...@conversantmedia.com]
Sent: 23 March 2017 03:39
To: Abhishek Kumar Maheshwari <abhishek.maheshw...@timesinternet.in>; 
user@cassandra.apache.org
Subject: RE: [Cassandra 3.0.9] Cannot allocate memory


And what is the configured max heap?
Sometimes you may also be able to see some useful messages in "dmesg" output.

Jayesh

____
From: Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
Sent: Wednesday, March 22, 2017 5:05:14 PM
To: Thakrar, Jayesh; user@cassandra.apache.org<mailto:user@cassandra.apache.org>
Subject: RE: [Cassandra 3.0.9] Cannot allocate memory

No, only Cassandra is running on these servers.

From: Thakrar, Jayesh [mailto:jthak...@conversantmedia.com]
Sent: 22 March 2017 22:27
To: Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>;
 user@cassandra.apache.org<mailto:user@cassandra.apache.org>
Subject: Re: [Cassandra 3.0.9] Cannot allocate memory

Is/are the Cassandra server(s) shared?
E.g. do they run mesos + spark?

From: Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
Date: Wednesday, March 22, 2017 at 12:45 AM
To: "user@cassandra.apache.org<mailto:user@cassandra.apache.org>" 
<user@cassandra.apache.org<mailto:user@cassandra.apache.org>>
Subject: [Cassandra 3.0.9] Cannot allocate memory

Hi all,

I am using Cassandra 3.0.9. While I am adding a new server, after some time I
get the exception below. The JVM options are attached.
Hardware info:
Ram 64 GB.
Core: 40


Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7fe9c44ee000, 12288, 0) failed; error='Cannot allocate 
memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 12288 bytes for committing 
reserved memory.
Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7f5c056ab000, 12288, 0) failed; error='Cannot allocate 
memory' (errno=12)
[thread 140033204860672 also had an error]
Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7f5c0566a000, 12288, 0) failed; error='Cannot allocate 
memory' (errno=12)
[thread 140033204594432 also had an error]Java HotSpot(TM) 64-Bit Server VM 
warning:
INFO: os::commit_memory(0x7fe9c420c000, 12288, 0) failed; error='Cannot 
allocate memory' (errno=12)
Java HotSpot(TM) 64-Bit Server VM warning: [thread 140641994852096 also had an 
error]INFO: os::commit_memory(0x7f5c055a7000, 12288, 0) failed; 
error='Cannot allocate memory' (errno=12)

Please let me know what I am missing.

Thanks & Regards,
Abhishek Kumar Maheshwari



RE: [Cassandra 3.0.9] Cannot allocate memory

2017-03-22 Thread Abhishek Kumar Maheshwari
The exception is as below:

INFO  17:42:37 Index build of 
til_lineitem_productsku_item_id,til_lineitem_productsku_status complete
WARN  17:43:28 G1 Old Generation GC in 2906ms.  G1 Eden Space: 1560281088 -> 0; 
G1 Old Gen: 3033393144 -> 1127339984; G1 Survivor Space
: 150994944 -> 0;
INFO  17:43:28 Pool NameActive   Pending  Completed   
Blocked  All Time Blocked
Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7f9f31a55000, 12288, 0) failed; error='Cannot allocate 
memory'
(errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 12288 bytes for committing 
reserved memory.
# An error report file with more information is saved as:
# /opt/apache-cassandra-3.0.9/bin/hs_err_pid8264.log
INFO  17:43:28 MutationStage 1 0   55409726 
0 0

INFO  17:43:28 ViewMutationStage 0 0  0 
0 0

INFO  17:43:28 ReadStage 0 0  0 
0 0

INFO  17:43:28 RequestResponseStage  0 0 17 
0 0

Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7f9f830d8000, 65536, 1) failed; error='Cannot allocate 
memory' (errno=12)INFO  17:43:28 ReadRepairStage   0 0  
0 0 0


[thread 140321932023552 also had an error]
[thread 140321811961600 also had an error]
ERROR 17:43:28 Exception in thread Thread[CompactionExecutor:2482,1,main]
org.apache.cassandra.io.FSReadError: java.io.IOException: Map failed
at org.apache.cassandra.io.util.ChannelProxy.map(ChannelProxy.java:156) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.io.util.MmappedRegions$State.add(MmappedRegions.java:280) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.io.util.MmappedRegions$State.access$400(MmappedRegions.java:216)
 ~[apache-cassandra-3.0.9.jar:3.0.9]

From: Abhishek Kumar Maheshwari
Sent: 22 March 2017 11:15
To: user@cassandra.apache.org
Subject: [Cassandra 3.0.9] Cannot allocate memory

Hi all,

I am using Cassandra 3.0.9. While I am adding a new server, after some time I
get the exception below. The JVM options are attached.
Hardware info:
Ram 64 GB.
Core: 40


Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7fe9c44ee000, 12288, 0) failed; error='Cannot allocate 
memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 12288 bytes for committing 
reserved memory.
Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7f5c056ab000, 12288, 0) failed; error='Cannot allocate 
memory' (errno=12)
[thread 140033204860672 also had an error]
Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7f5c0566a000, 12288, 0) failed; error='Cannot allocate 
memory' (errno=12)
[thread 140033204594432 also had an error]Java HotSpot(TM) 64-Bit Server VM 
warning:
INFO: os::commit_memory(0x7fe9c420c000, 12288, 0) failed; error='Cannot 
allocate memory' (errno=12)
Java HotSpot(TM) 64-Bit Server VM warning: [thread 140641994852096 also had an 
error]INFO: os::commit_memory(0x7f5c055a7000, 12288, 0) failed; 
error='Cannot allocate memory' (errno=12)

Please let me know what I am missing.

Thanks & Regards,
Abhishek Kumar Maheshwari



RE: [Cassandra 3.0.9] Cannot allocate memory

2017-03-22 Thread Abhishek Kumar Maheshwari
Hi Abhishek,

In the sysctl.conf file we have the below settings:

vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2
vm.max_map_count = 1048575

So do I need to apply a patch for the same?

From: Abhishek Verma [mailto:ve...@uber.com]
Sent: 22 March 2017 23:04
To: user@cassandra.apache.org
Cc: Abhishek Kumar Maheshwari <abhishek.maheshw...@timesinternet.in>
Subject: Re: [Cassandra 3.0.9] Cannot allocate memory

Just a shot in the dark, but what is your setting of vm.max_map_count in 
/etc/sysctl.conf ?

It is recommended to set it to:
vm.max_map_count = 1048575

Source: 
https://docs.datastax.com/en/landing_page/doc/landing_page/recommendedSettingsLinux.html

We saw a similar problem in the past where mmap failed, and we added a check to 
emit a warning as part of https://issues.apache.org/jira/browse/CASSANDRA-13008.
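To see how close a process is to the vm.max_map_count ceiling, the limit can be compared with a process's mapped-region count from /proc. A sketch (run on the Cassandra host; for the real server process, count the lines of /proc/&lt;cassandra-pid&gt;/maps rather than the tool's own):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MapCount {
    // Parse a sysctl-style single-number file body, e.g. "1048575\n".
    static long parseCount(String body) { return Long.parseLong(body.trim()); }

    public static void main(String[] args) throws IOException {
        Path limit = Path.of("/proc/sys/vm/max_map_count");
        Path maps = Path.of("/proc/self/maps"); // one line per mapped region
        if (Files.exists(limit) && Files.exists(maps)) {
            long max = parseCount(Files.readString(limit));
            long used;
            try (var lines = Files.lines(maps)) { used = lines.count(); }
            System.out.printf("mapped regions: %d of %d allowed%n", used, max);
        } else {
            System.out.println("not a Linux /proc environment");
        }
    }
}
```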


On Wed, Mar 22, 2017 at 9:57 AM, Thakrar, Jayesh 
<jthak...@conversantmedia.com<mailto:jthak...@conversantmedia.com>> wrote:
Is/are the Cassandra server(s) shared?
E.g. do they run mesos + spark?

From: Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
Date: Wednesday, March 22, 2017 at 12:45 AM
To: "user@cassandra.apache.org<mailto:user@cassandra.apache.org>" 
<user@cassandra.apache.org<mailto:user@cassandra.apache.org>>
Subject: [Cassandra 3.0.9] Cannot allocate memory

Hi all,

I am using Cassandra 3.0.9. While I am adding a new server, after some time I
get the exception below. The JVM options are attached.
Hardware info:
Ram 64 GB.
Core: 40


Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7fe9c44ee000, 12288, 0) failed; error='Cannot allocate 
memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 12288 bytes for committing 
reserved memory.
Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7f5c056ab000, 12288, 0) failed; error='Cannot allocate 
memory' (errno=12)
[thread 140033204860672 also had an error]
Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7f5c0566a000, 12288, 0) failed; error='Cannot allocate 
memory' (errno=12)
[thread 140033204594432 also had an error]Java HotSpot(TM) 64-Bit Server VM 
warning:
INFO: os::commit_memory(0x7fe9c420c000, 12288, 0) failed; error='Cannot 
allocate memory' (errno=12)
Java HotSpot(TM) 64-Bit Server VM warning: [thread 140641994852096 also had an 
error]INFO: os::commit_memory(0x7f5c055a7000, 12288, 0) failed; 
error='Cannot allocate memory' (errno=12)

Please let me know what I am missing.

Thanks & Regards,
Abhishek Kumar Maheshwari




RE: [Cassandra 3.0.9] Cannot allocate memory

2017-03-22 Thread Abhishek Kumar Maheshwari
No, only Cassandra is running on these servers.

From: Thakrar, Jayesh [mailto:jthak...@conversantmedia.com]
Sent: 22 March 2017 22:27
To: Abhishek Kumar Maheshwari <abhishek.maheshw...@timesinternet.in>; 
user@cassandra.apache.org
Subject: Re: [Cassandra 3.0.9] Cannot allocate memory

Is/are the Cassandra server(s) shared?
E.g. do they run mesos + spark?

From: Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
Date: Wednesday, March 22, 2017 at 12:45 AM
To: "user@cassandra.apache.org<mailto:user@cassandra.apache.org>" 
<user@cassandra.apache.org<mailto:user@cassandra.apache.org>>
Subject: [Cassandra 3.0.9] Cannot allocate memory

Hi all,

I am using Cassandra 3.0.9. While I am adding a new server, after some time I
get the exception below. The JVM options are attached.
Hardware info:
Ram 64 GB.
Core: 40


Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7fe9c44ee000, 12288, 0) failed; error='Cannot allocate 
memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 12288 bytes for committing 
reserved memory.
Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7f5c056ab000, 12288, 0) failed; error='Cannot allocate 
memory' (errno=12)
[thread 140033204860672 also had an error]
Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7f5c0566a000, 12288, 0) failed; error='Cannot allocate 
memory' (errno=12)
[thread 140033204594432 also had an error]Java HotSpot(TM) 64-Bit Server VM 
warning:
INFO: os::commit_memory(0x7fe9c420c000, 12288, 0) failed; error='Cannot 
allocate memory' (errno=12)
Java HotSpot(TM) 64-Bit Server VM warning: [thread 140641994852096 also had an 
error]INFO: os::commit_memory(0x7f5c055a7000, 12288, 0) failed; 
error='Cannot allocate memory' (errno=12)

Please let me know what I am missing.

Thanks & Regards,
Abhishek Kumar Maheshwari



[Cassandra 3.0.9] Cannot allocate memory

2017-03-21 Thread Abhishek Kumar Maheshwari
Hi all,

I am using Cassandra 3.0.9. While I am adding a new server, after some time I
get the exception below. The JVM options are attached.
Hardware info:
Ram 64 GB.
Core: 40


Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7fe9c44ee000, 12288, 0) failed; error='Cannot allocate 
memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 12288 bytes for committing 
reserved memory.
Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7f5c056ab000, 12288, 0) failed; error='Cannot allocate 
memory' (errno=12)
[thread 140033204860672 also had an error]
Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7f5c0566a000, 12288, 0) failed; error='Cannot allocate 
memory' (errno=12)
[thread 140033204594432 also had an error]Java HotSpot(TM) 64-Bit Server VM 
warning:
INFO: os::commit_memory(0x7fe9c420c000, 12288, 0) failed; error='Cannot 
allocate memory' (errno=12)
Java HotSpot(TM) 64-Bit Server VM warning: [thread 140641994852096 also had an 
error]INFO: os::commit_memory(0x7f5c055a7000, 12288, 0) failed; 
error='Cannot allocate memory' (errno=12)

Please let me know what I am missing.

Thanks & Regards,
Abhishek Kumar Maheshwari



jvm.options
Description: jvm.options


[Cassandra 3.0.9 ] Disable “delete/Truncate/Drop”

2017-04-04 Thread Abhishek Kumar Maheshwari
Hi all,

Is there any way to disable the DELETE/TRUNCATE/DROP commands in Cassandra?

If yes, how can we implement it?

Thanks & Regards,
Abhishek Kumar Maheshwari



[Cassandra 3.0.9] In Memory table

2017-04-20 Thread Abhishek Kumar Maheshwari
Hi All,

The DataStax Enterprise version of Cassandra provides in-memory tables. Can we
achieve the same thing in Apache Cassandra?

http://docs.datastax.com/en/archived/datastax_enterprise/4.6/datastax_enterprise/inMemory.html




Thanks & Regards,
Abhishek Kumar Maheshwari



RE: [Cassandra] nodetool compactionstats not showing pending task.

2017-04-28 Thread Abhishek Kumar Maheshwari
Hi ,

I will try with JMX, but I did try with tpstats. tpstats shows pending
compactions as 0, while nodetool compactionstats shows 3, which seems strange
to me.

Thanks & Regards,
Abhishek Kumar Maheshwari

From: Alain RODRIGUEZ [mailto:arodr...@gmail.com]
Sent: Thursday, April 27, 2017 4:45 PM
To: user@cassandra.apache.org
Subject: Re: [Cassandra] nodetool compactionstats not showing pending task.

Maybe try to monitor through JMX with 
'org.apache.cassandra.db:type=CompactionManager', attribute 'Compactions' or 
'CompactionsSummary'
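Either bean can be read with a plain JMX client from the JDK, with no extra dependencies. A sketch against the pending-tasks metrics bean named in this thread (host and port are placeholders; 7199 is Cassandra's default JMX port, and SSL/authentication handling is omitted):

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PendingCompactions {
    // Bean named in this thread; metrics gauges expose their reading as "Value".
    static final String BEAN =
        "org.apache.cassandra.metrics:type=Compaction,name=PendingTasks";

    static ObjectName beanName() {
        try { return new ObjectName(BEAN); }
        catch (Exception e) { throw new IllegalStateException(e); }
    }

    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            System.out.println("usage: PendingCompactions <host> [port]");
            return;
        }
        String host = args[0];
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 7199;
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
        try (JMXConnector c = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = c.getMBeanServerConnection();
            Object pending = mbs.getAttribute(beanName(), "Value");
            System.out.println("Pending compactions: " + pending);
        }
    }
}
```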

C*heers
---
Alain Rodriguez - @arodream - 
al...@thelastpickle.com<mailto:al...@thelastpickle.com>
France

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

2017-04-27 12:27 GMT+02:00 Alain RODRIGUEZ 
<arodr...@gmail.com<mailto:arodr...@gmail.com>>:
Hi,

I am not sure about this one. It happened to me in the past as well. I never
really wondered about it because, off the top of my head, it was gone after a
while or after a restart. So to get rid of it, a restart might be enough.

But if you feel like troubleshooting this, I think the first thing is to try to
see whether compactions are really happening. Maybe using JMX: I believe
`org.apache.cassandra.metrics:type=Compaction,name=PendingTasks` is what is
used by 'nodetool compactionstats', but there might be more info there. I don't
really know what the 'system.compactions_in_progress' table was replaced by,
but any way to double-check that you can think of would probably help in
understanding what's happening.

Does someone know the way to check pending compaction details in 3.0.9?

C*heers,
---
Alain Rodriguez - @arodream - 
al...@thelastpickle.com<mailto:al...@thelastpickle.com>
France

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

2017-04-25 15:13 GMT+02:00 Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>:
Hi All,

In Production, I am using Cassandra 3.0.9.

While I am running the nodetool compactionstats command, it just shows the
count and no other information, like below:

[mohit.kundra@AdtechApp bin]$ ./nodetool -h XXX.XX.XX.XX compactionstats
pending tasks: 3
[mohit.kundra@AdtechAppX bin]$

So is this a Cassandra bug, or what? I am not able to understand it.


Thanks & Regards,
Abhishek Kumar Maheshwari





[Cassandra] nodetool compactionstats not showing pending task.

2017-04-25 Thread Abhishek Kumar Maheshwari
Hi All,

In Production, I am using Cassandra 3.0.9.

While I am running the nodetool compactionstats command, it just shows the
count and no other information, like below:

[mohit.kundra@AdtechApp bin]$ ./nodetool -h XXX.XX.XX.XX compactionstats
pending tasks: 3
[mohit.kundra@AdtechAppX bin]$

So is this a Cassandra bug, or what? I am not able to understand it.


Thanks & Regards,
Abhishek Kumar Maheshwari



[Cassandra] Ignoring interval time

2017-05-30 Thread Abhishek Kumar Maheshwari
Hi All,

Please let me know why this debug log message is appearing:

DEBUG [GossipStage:1] 2017-05-30 15:01:31,496 FailureDetector.java:456 - 
Ignoring interval time of 2000686406 for /XXX.XX.XXX.204
DEBUG [GossipStage:1] 2017-05-30 15:01:34,497 FailureDetector.java:456 - 
Ignoring interval time of 2349724693 for /XXX.XX.XXX.207
DEBUG [GossipStage:1] 2017-05-30 15:01:34,497 FailureDetector.java:456 - 
Ignoring interval time of 2000655389 for /XXX.XX.XXX.206
DEBUG [GossipStage:1] 2017-05-30 15:01:34,497 FailureDetector.java:456 - 
Ignoring interval time of 2000721304 for /XXX.XX.XXX.201
DEBUG [GossipStage:1] 2017-05-30 15:01:34,497 FailureDetector.java:456 - 
Ignoring interval time of 2000770809 for /XXX.XX.XXX.202
DEBUG [GossipStage:1] 2017-05-30 15:01:34,497 FailureDetector.java:456 - 
Ignoring interval time of 2000825217 for /XXX.XX.XXX.209
DEBUG [GossipStage:1] 2017-05-30 15:01:35,449 FailureDetector.java:456 - 
Ignoring interval time of 2953167747 for /XXX.XX.XXX.205
DEBUG [GossipStage:1] 2017-05-30 15:01:37,497 FailureDetector.java:456 - 
Ignoring interval time of 2047662469 for /XXX.XX.XXX.205
DEBUG [GossipStage:1] 2017-05-30 15:01:37,497 FailureDetector.java:456 - 
Ignoring interval time of 2000717144 for /XXX.XX.XXX.207
DEBUG [GossipStage:1] 2017-05-30 15:01:37,497 FailureDetector.java:456 - 
Ignoring interval time of 2000780785 for /XXX.XX.XXX.201
DEBUG [GossipStage:1] 2017-05-30 15:01:38,497 FailureDetector.java:456 - 
Ignoring interval time of 2000113606 for /XXX.XX.XXX.209
DEBUG [GossipStage:1] 2017-05-30 15:01:39,121 FailureDetector.java:456 - 
Ignoring interval time of 2334491585 for /XXX.XX.XXX.204
DEBUG [GossipStage:1] 2017-05-30 15:01:39,497 FailureDetector.java:456 - 
Ignoring interval time of 2000209788 for /XXX.XX.XXX.207
DEBUG [GossipStage:1] 2017-05-30 15:01:39,497 FailureDetector.java:456 - 
Ignoring interval time of 2000226568 for /XXX.XX.XXX.208
DEBUG [GossipStage:1] 2017-05-30 15:01:42,178 FailureDetector.java:456 - 
Ignoring interval time of 2390977968 for /XXX.XX.XXX.204
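Note that every ignored value above exceeds 2,000,000,000 ns: the failure detector discards heartbeat intervals longer than a fixed cap (2 seconds by default, tunable via the cassandra.fd_max_interval_ms system property) so that GC pauses and network stalls do not distort its phi estimate, and it logs the discard only at DEBUG. A sketch of that filtering; the cap value here is inferred from the log and the documented default, so verify it against FailureDetector.java for your build:

```java
import java.util.List;
import java.util.stream.Collectors;

public class IntervalFilter {
    // Assumed cap, matching cassandra.fd_max_interval_ms = 2000 (2e9 ns).
    static final long MAX_INTERVAL_NANOS = 2_000_000_000L;

    // Keep only the heartbeat intervals the failure detector would accept;
    // everything else produces an "Ignoring interval time" DEBUG line.
    static List<Long> accepted(List<Long> intervalsNanos) {
        return intervalsNanos.stream()
                             .filter(i -> i <= MAX_INTERVAL_NANOS)
                             .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Two values taken from the log above, plus one that would be accepted.
        List<Long> observed = List.of(1_950_000_000L, 2_000_686_406L, 2_349_724_693L);
        System.out.println(accepted(observed)); // only the first value survives
    }
}
```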

Thanks & Regards,
Abhishek Kumar Maheshwari




org.apache.cassandra.service.DigestMismatchException: Mismatch for key DecoratedKey

2017-05-30 Thread Abhishek Kumar Maheshwari
Hi All,

I am getting the below exception in debug.log:

DEBUG [ReadRepairStage:636754] 2017-05-30 14:49:44,259 ReadCallback.java:234 - 
Digest mismatch:
org.apache.cassandra.service.DigestMismatchException: Mismatch for key 
DecoratedKey(4329955402556695061, 000808440801579b425c4000) 
(343b7ef24feb594118ecb4bf7680d07f vs d41d8cd98f00b204e9800998ecf8427e)
at 
org.apache.cassandra.service.DigestResolver.resolve(DigestResolver.java:85) 
~[apache-cassandra-3.0.9.jar:3.0.9]
at 
org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:225)
 ~[apache-cassandra-3.0.9.jar:3.0.9]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_101]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]


Could someone explain why this is appearing?
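Two details in the log above are worth noting. A DigestMismatchException at DEBUG level generally just means the replicas' responses differed for that key and a full-data read repair round was triggered; it is not an error by itself. Also, the second digest, d41d8cd98f00b204e9800998ecf8427e, is the MD5 of empty input, which suggests one replica returned no data for the key. The latter can be reproduced with a standalone sketch (not Cassandra code):

```java
import java.security.MessageDigest;

public class EmptyMd5 {
    // Hex-encode the MD5 digest of the given bytes
    static String md5Hex(byte[] data) throws Exception {
        StringBuilder sb = new StringBuilder();
        for (byte b : MessageDigest.getInstance("MD5").digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // MD5 of zero bytes matches the second digest in the log above
        System.out.println(md5Hex(new byte[0])); // d41d8cd98f00b204e9800998ecf8427e
    }
}
```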




RE: [Cassandra] nodetool compactionstats not showing pending task.

2017-05-04 Thread Abhishek Kumar Maheshwari
I just restarted the cluster but am still facing the same issue. Could you let me 
know where I should search on JIRA, or should I raise a new ticket for this?


From: kurt greaves [mailto:k...@instaclustr.com]
Sent: Tuesday, May 2, 2017 11:30 AM
To: Abhishek Kumar Maheshwari <abhishek.maheshw...@timesinternet.in>
Cc: Alain RODRIGUEZ <arodr...@gmail.com>; user@cassandra.apache.org
Subject: Re: [Cassandra] nodetool compactionstats not showing pending task.

I believe this is a bug with the estimation of tasks, however not aware of any 
JIRA that covers the issue.

On 28 April 2017 at 06:19, Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>
 wrote:
Hi ,

I will try JMX, but I have already tried tpstats. tpstats shows pending 
compactions as 0, while nodetool compactionstats shows 3, which seems strange 
to me.


From: Alain RODRIGUEZ [mailto:arodr...@gmail.com<mailto:arodr...@gmail.com>]
Sent: Thursday, April 27, 2017 4:45 PM
To: user@cassandra.apache.org<mailto:user@cassandra.apache.org>
Subject: Re: [Cassandra] nodetool compactionstats not showing pending task.

Maybe try to monitor through JMX with 
'org.apache.cassandra.db:type=CompactionManager', attribute 'Compactions' or 
'CompactionsSummary'

C*heers
---
Alain Rodriguez - @arodream - 
al...@thelastpickle.com<mailto:al...@thelastpickle.com>
France

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com
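The JMX check suggested above can be sketched as a small standalone client. The host, port, and attribute name are assumptions (7199 is Cassandra's default JMX port; the MBean and attribute names are the ones mentioned in this thread), so treat this as a starting point rather than a finished tool:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CompactionJmx {
    // Build the standard JMX-over-RMI service URL for a host/port
    static String jmxUrl(String host, int port) {
        return "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi";
    }

    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(jmxUrl("127.0.0.1", 7199));
        try (JMXConnector c = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = c.getMBeanServerConnection();
            // Currently running compactions, per the MBean named above
            ObjectName mgr = new ObjectName(
                "org.apache.cassandra.db:type=CompactionManager");
            System.out.println(mbs.getAttribute(mgr, "Compactions"));
            // The pending-task estimate that compactionstats reports
            ObjectName pending = new ObjectName(
                "org.apache.cassandra.metrics:type=Compaction,name=PendingTasks");
            System.out.println(mbs.getAttribute(pending, "Value"));
        }
    }
}
```

Comparing the two outputs shows whether the pending count is backed by real compaction activity or is only a stale estimate.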

2017-04-27 12:27 GMT+02:00 Alain RODRIGUEZ 
<arodr...@gmail.com<mailto:arodr...@gmail.com>>:
Hi,

I am not sure about this one. It happened to me in the past as well; it went 
away after a while, or after a restart, if I remember correctly. So to get rid 
of it, a restart might be enough.

But if you feel like troubleshooting this, I think the first thing is to check 
whether compactions are really happening, perhaps using JMX. I believe 
`org.apache.cassandra.metrics:type=Compaction,name=PendingTasks` is what 
'nodetool compactionstats' uses, but there might be more information there. I 
don't actually know what the 'system.compactions_in_progress' table was 
replaced by, but any way you can think of to double-check would probably help 
us understand what's happening.

Does someone know of a way to check pending compaction details in 3.0.9?

C*heers,
---
Alain Rodriguez - @arodream - 
al...@thelastpickle.com<mailto:al...@thelastpickle.com>
France

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

2017-04-25 15:13 GMT+02:00 Abhishek Kumar Maheshwari 
<abhishek.maheshw...@timesinternet.in<mailto:abhishek.maheshw...@timesinternet.in>>:
Hi All,

In Production, I am using Cassandra 3.0.9.

When I run the nodetool compactionstats command, it shows only the count and no 
other information, like below:

[mohit.kundra@AdtechApp bin]$ ./nodetool -h XXX.XX.XX.XX compactionstats
pending tasks: 3
[mohit.kundra@AdtechAppX bin]$

Is this a Cassandra bug? I am not able to understand it.







All Cassandra client-side threads stuck on session.execute

2017-12-12 Thread Abhishek Kumar Maheshwari
Hi all,

All my threads get stuck while I am inserting data into one table:

"ConvOptJob-Attribution-10" - Thread t@94
   java.lang.Thread.State: WAITING
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <5b510b7b> (a
com.google.common.util.concurrent.AbstractFuture$Sync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at
com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:285)
at
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
at
com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:137)
at
com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:243)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:68)
at
com.toi.stream.service.ItemLineitemPerformanceService.getInsertQueryPerformaceItemLineitemData(ItemLineitemPerformanceService.java:148)
at
com.toi.stream.service.ConversionOptimizationServiceImpl.processForOptimizationInCassandra(ConversionOptimizationServiceImpl.java:197)
at
com.toi.stream.process.ConversionOptimizaionProcessor.process(ConversionOptimizaionProcessor.java:54)
at com.toi.stream.kafka.KafkaGroupConsumer.run(KafkaGroupConsumer.java:159)
at java.lang.Thread.run(Thread.java:748)
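The dump shows each worker parked inside a synchronous session.execute(...), so one slow node can stall every consumer. A common mitigation (a sketch, not from this thread) is to switch to async writes and cap the number in flight with a semaphore:

```java
import java.util.concurrent.Semaphore;

// Cap in-flight writes so a slow or unresponsive node cannot park
// every consumer thread at once.
public class BoundedWrites {
    private final Semaphore permits;

    public BoundedWrites(int maxInFlight) {
        this.permits = new Semaphore(maxInFlight);
    }

    // Block until a slot is free; call before issuing an async write.
    public void acquire() throws InterruptedException { permits.acquire(); }

    // Non-blocking variant; returns false when maxInFlight writes are pending.
    public boolean tryAcquire() { return permits.tryAcquire(); }

    // Call from the write's completion callback, on success or failure.
    public void release() { permits.release(); }
}
```

With the DataStax 3.x driver, the consumer loop would call acquire(), then session.executeAsync(stmt), and release the permit from a Futures.addCallback(...) completion callback, so KafkaGroupConsumer threads never block inside the synchronous execute() shown in the dump. (executeAsync and Futures.addCallback are real driver/Guava APIs; the wiring described here is a sketch.)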




Driver consistency issue

2018-02-27 Thread Abhishek Kumar Maheshwari
Hi All,

I have a keyspace in Cassandra (Cassandra version 3.0.9, 12 servers in total)
with the definition below:

{'DC1': '2', 'class':
'org.apache.cassandra.locator.NetworkTopologyStrategy'}

Sometimes I am getting the exception below:

com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra
timeout during write query at consistency QUORUM (3 replica were required
but only 2 acknowledged the write)
at
com.datastax.driver.core.exceptions.WriteTimeoutException.copy(WriteTimeoutException.java:73)
at
com.datastax.driver.core.exceptions.WriteTimeoutException.copy(WriteTimeoutException.java:26)
at
com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at
com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at
com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:68)
at
com.toi.stream.data.AdImprLogDaoImpl.updateImpr(AdImprLogDaoImpl.java:158)
at
com.toi.stream.service.AdClickLogAndAdimprLogServiceImpl.updateGoalsOnImpr(AdClickLogAndAdimprLogServiceImpl.java:522)
at
com.toi.stream.service.ConversionBillingLastAttributionServiceV2Impl.attribute(ConversionBillingLastAttributionServiceV2Impl.java:456)
at
com.toi.stream.service.ConversionBillingLastAttributionServiceV2Impl.attributeAdTracker(ConversionBillingLastAttributionServiceV2Impl.java:228)
at
com.toi.stream.process.AdTrackerStreamProcessorV2.process(AdTrackerStreamProcessorV2.java:86)
at
com.toi.stream.kafka.KafkaGroupConsumer.run(KafkaGroupConsumer.java:175)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException:
Cassandra timeout during write query at consistency QUORUM (3 replica were
required but only 2 acknowledged the write)
at
com.datastax.driver.core.exceptions.WriteTimeoutException.copy(WriteTimeoutException.java:100)
at
com.datastax.driver.core.Responses$Error.asException(Responses.java:134)
at
com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:525)
at
com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1077)

Why is it waiting for an acknowledgement from a 3rd server when the replication
factor is 2?




Re: Driver consistency issue

2018-02-27 Thread Abhishek Kumar Maheshwari
Hi Alex,

I have only one DC (named DC1) and only one keyspace, so I don't think either
scenario is possible. (Yes, in my case QUORUM is equivalent to ALL.)

cqlsh> SELECT * FROM system_schema.keyspaces  where keyspace_name='adlog' ;

 keyspace_name | durable_writes | replication
---------------+----------------+--------------------------------------------------------
         adlog |           True | {'DC1': '2', 'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy'}


On Tue, Feb 27, 2018 at 2:27 PM, Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:

> On Tue, Feb 27, 2018 at 9:45 AM, Abhishek Kumar Maheshwari <
> abhishek.maheshw...@timesinternet.in> wrote:
>
>>
>> i have a KeySpace in Cassandra (cassandra version 3.0.9- total 12 Servers
>> )With below definition:
>>
>> {'DC1': '2', 'class': 'org.apache.cassandra.locator.
>> NetworkTopologyStrategy'}
>>
>> Some time i am getting below exception
>>
>> [snip]
>
>> Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException:
>> Cassandra timeout during write query at consistency QUORUM (3 replica were
>> required but only 2 acknowledged the write)
>> at com.datastax.driver.core.exceptions.WriteTimeoutException.co
>> py(WriteTimeoutException.java:100)
>> at com.datastax.driver.core.Responses$Error.asException(Respons
>> es.java:134)
>> at com.datastax.driver.core.RequestHandler$SpeculativeExecution
>> .onSet(RequestHandler.java:525)
>> at com.datastax.driver.core.Connection$Dispatcher.channelRead0(
>> Connection.java:1077)
>>
>> why its waiting for acknowledged from 3rd server as replication factor
>> is 2?
>>
>
> I see two possibilities:
>
> 1) The data in this keyspace is replicated to another DC, so there is also
> 'DC2': '2', for example, but you didn't show it.  In this case QUORUM
> requires more than 2 nodes.
> 2) The write was targeting a table in a different keyspace than you think.
>
> In any case QUORUM (or LOCAL_QUORUM) with RF=2 is equivalent of ALL.  Not
> sure why would you use it in the first place.
>
> For consistency levels involving quorum you want to go with RF=3 in a
> single DC.  For multi DC you should think if you want QUORUM or EACH_QUORUM
> for your writes and figure out the RFs from that.
>
> Cheers,
> --
> Alex
>
>
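The quorum arithmetic Alex describes can be checked directly: a quorum is floor(RF / 2) + 1 replicas, and plain QUORUM counts replicas across all DCs. The hidden second DC in his first possibility is an assumption, but it is the RF configuration that would reproduce the "3 required" in the error:

```java
public class QuorumMath {
    // Replicas required for a QUORUM ack: floor(RF / 2) + 1
    static int quorum(int rf) {
        return rf / 2 + 1;
    }

    public static void main(String[] args) {
        System.out.println(quorum(2));     // 2 -> with RF=2, QUORUM == ALL
        System.out.println(quorum(3));     // 2 -> one replica may be down
        // Plain QUORUM sums RFs across DCs: a hidden 'DC2': '2' would give
        System.out.println(quorum(2 + 2)); // 3 -> matches "3 replica were required"
    }
}
```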




Re: Driver consistency issue

2018-02-27 Thread Abhishek Kumar Maheshwari
Hi,

Not always; I am getting this exception randomly. (One observation: I mostly
get this exception when I add a new node to the cluster.)

On Tue, Feb 27, 2018 at 4:29 PM, Nicolas Guyomar <nicolas.guyo...@gmail.com>
wrote:

> Hi,
>
> Adding the java-driver ML for further questions, because this does look
> like a bug.
>
> Are you able to reproduce it in a clean environment using the same C*
> version and driver version?
>
>
> On 27 February 2018 at 10:05, Abhishek Kumar Maheshwari <
> abhishek.maheshw...@timesinternet.in> wrote:
>
>> Hi Alex,
>>
>> i have only One DC (with name DC1) and have only one keyspace. So i dont
>> think so both of the scenario is possible. (yes in my case QUORUM is  
>> equivalent
>> of ALL)
>>
>> cqlsh> SELECT * FROM system_schema.keyspaces  where keyspace_name='adlog'
>> ;
>>
>>  keyspace_name | durable_writes | replication
>> ---++---
>> 
>>  adlog |   True | {'DC1': '2', 'class':
>> 'org.apache.cassandra.locator.NetworkTopologyStrategy'}
>>
>>
>> On Tue, Feb 27, 2018 at 2:27 PM, Oleksandr Shulgin <
>> oleksandr.shul...@zalando.de> wrote:
>>
>>> On Tue, Feb 27, 2018 at 9:45 AM, Abhishek Kumar Maheshwari <
>>> abhishek.maheshw...@timesinternet.in> wrote:
>>>
>>>>
>>>> i have a KeySpace in Cassandra (cassandra version 3.0.9- total 12
>>>> Servers )With below definition:
>>>>
>>>> {'DC1': '2', 'class': 'org.apache.cassandra.locator.
>>>> NetworkTopologyStrategy'}
>>>>
>>>> Some time i am getting below exception
>>>>
>>>> [snip]
>>>
>>>> Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException:
>>>> Cassandra timeout during write query at consistency QUORUM (3 replica were
>>>> required but only 2 acknowledged the write)
>>>> at com.datastax.driver.core.exceptions.WriteTimeoutException.co
>>>> py(WriteTimeoutException.java:100)
>>>> at com.datastax.driver.core.Responses$Error.asException(Respons
>>>> es.java:134)
>>>> at com.datastax.driver.core.RequestHandler$SpeculativeExecution
>>>> .onSet(RequestHandler.java:525)
>>>> at com.datastax.driver.core.Connection$Dispatcher.channelRead0(
>>>> Connection.java:1077)
>>>>
>>>> why its waiting for acknowledged from 3rd server as replication factor
>>>> is 2?
>>>>
>>>
>>> I see two possibilities:
>>>
>>> 1) The data in this keyspace is replicated to another DC, so there is
>>> also 'DC2': '2', for example, but you didn't show it.  In this case QUORUM
>>> requires more than 2 nodes.
>>> 2) The write was targeting a table in a different keyspace than you
>>> think.
>>>
>>> In any case QUORUM (or LOCAL_QUORUM) with RF=2 is equivalent of ALL.
>>> Not sure why would you use it in the first place.
>>>
>>> For consistency levels involving quorum you want to go with RF=3 in a
>>> single DC.  For multi DC you should think if you want QUORUM or EACH_QUORUM
>>> for your writes and figure out the RFs from that.
>>>
>>> Cheers,
>>> --
>>> Alex
>>>
>>>
>>
>>
>>
>
>

