Re: failure node rejoin

2016-11-23 Thread Ben Slater
You could certainly log a JIRA for the “failure node rejoin” issue ( https://issues.apache.org/*jira*/browse/ *cassandra ). I*t sounds like unexpected behaviour to me. However, I’m not sure it will be viewed a

Re: failure node rejoin

2016-11-23 Thread Yuji Ito
Hi Ben, I continue to investigate the data loss issue. I'm investigating the logs and source code and trying to reproduce the data loss issue with a simple test. I'm also trying my destructive test with DROP instead of TRUNCATE. BTW, I want to discuss the issue in the title, "failure node rejoin", again.

Bulk Import Question

2016-11-23 Thread Joe Olson
I'm following the Cassandra bulk import example here: https://github.com/yukim/cassandra-bulkload-example Are the Cassandra data types inet, smallint, and tinyint supported by the bulk import CQLSSTableWriter? I can't seem to get them to work...
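
For reference, a minimal CQLSSTableWriter sketch (not taken from the linked example; the keyspace, table, columns and output path below are illustrative assumptions). It assumes the usual CQL-to-Java mapping for these types: inet -> java.net.InetAddress, smallint -> Short, tinyint -> Byte; if wider boxed types are passed instead (e.g. Integer for a smallint column), the row will likely be rejected, which may be why the types appear unsupported.

  import java.io.File;
  import java.net.InetAddress;
  import org.apache.cassandra.io.sstable.CQLSSTableWriter;

  public class BulkLoadSketch {
      // Hypothetical schema -- substitute your own keyspace/table.
      static final String SCHEMA =
          "CREATE TABLE demo.hosts (addr inet PRIMARY KEY, port smallint, flags tinyint)";
      static final String INSERT =
          "INSERT INTO demo.hosts (addr, port, flags) VALUES (?, ?, ?)";

      public static void main(String[] args) throws Exception {
          File out = new File("/tmp/demo/hosts");   // output directory for the generated SSTables
          out.mkdirs();
          CQLSSTableWriter writer = CQLSSTableWriter.builder()
              .inDirectory(out)
              .forTable(SCHEMA)
              .using(INSERT)
              .build();
          // Note the boxed/narrow types: InetAddress for inet, Short for smallint, Byte for tinyint.
          writer.addRow(InetAddress.getByName("192.168.1.10"), (short) 9042, (byte) 1);
          writer.close();
      }
  }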

Re: Row and column level tombstones

2016-11-23 Thread Vladimir Yudovin
You are right, only new inserts after delete are taken into account:
CREATE KEYSPACE ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
CREATE TABLE ks.tb (id int PRIMARY KEY, str text);
INSERT INTO ks.tb (id, str) VALUES (0, '');
DELETE FROM ks.tb WHERE id = 0;

Row and column level tombstones

2016-11-23 Thread Andrew Cooper
What would be returned in the following example?
- Row with columns exists
- Row is deleted (row tombstone)
- Row key is recreated
Would columns that existed before the row delete/tombstone show back up in a read if the row key is recreated? My assumption is that the row key tombstone timestamp is

Re: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Manoj Khangaonkar
Hi, What is your write consistency setting? regards On Wed, Nov 23, 2016 at 3:48 AM, Vladimir Yudovin wrote: > Try to build cluster with .withPoolingOptions > > Best regards, Vladimir Yudovin, > Winguzone - Cloud Cassandra Hosting >

Re: Reading Commit log files

2016-11-23 Thread Kamesh
Hi Carlos, durable_writes = true.
cqlsh:test> describe test;
CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'} AND durable_writes = true;
Thanks & Regards Kamesh. On Wed, Nov 23, 2016 at 9:10 PM, Carlos Alonso wrote: >

Re: Reading Commit log files

2016-11-23 Thread Carlos Alonso
Did you configure your keyspace with durable_writes = false by any chance? That would make operations not reach the commitlog. On Wed, 23 Nov 2016 at 13:06 Kamesh wrote: > Hi Carlos, > Thanks for your response. > I performed a few insert statements and ran my application

Re: Reading Commit log files

2016-11-23 Thread Kamesh
Hi Carlos, Thanks for your response. I performed a few insert statements and ran my application without flushing, but I am still not able to read the commit logs. However, I am able to read the commit logs of the system and system_schema keyspaces but not the application keyspace (key

Re: Reading Commit log files

2016-11-23 Thread Carlos Alonso
Hi Kamesh. Flushing memtables to disk causes the corresponding commitlog segments to be deleted. Once the data is flushed into SSTables it can be considered durable (in case of a node crash, the data won't be lost), and therefore there's no point in keeping it in the commitlog as well. Try

RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Vladimir Yudovin
Try to build the cluster with .withPoolingOptions Best regards, Vladimir Yudovin, Winguzone - Cloud Cassandra Hosting On Wed, 23 Nov 2016 05:57:58 -0500 Abhishek Kumar Maheshwari abhishek.maheshw...@timesinternet.in wrote: Yes, i also try with async mode but I got max speed on
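
A minimal sketch of what that could look like with the DataStax Java driver 3.x (the contact points and pool sizes below are illustrative assumptions, not values from this thread):

  import com.datastax.driver.core.Cluster;
  import com.datastax.driver.core.HostDistance;
  import com.datastax.driver.core.PoolingOptions;
  import com.datastax.driver.core.Session;

  public class PooledClusterSketch {
      public static void main(String[] args) {
          // Hypothetical pool sizes -- tune against host count and target request rate.
          PoolingOptions pooling = new PoolingOptions()
              .setConnectionsPerHost(HostDistance.LOCAL, 2, 8)
              .setMaxRequestsPerConnection(HostDistance.LOCAL, 1024);

          Cluster cluster = Cluster.builder()
              .addContactPoints("10.0.0.1", "10.0.0.2")   // placeholder contact points
              .withPoolingOptions(pooling)
              .build();
          Session session = cluster.connect();
          // ... run statements with session ...
          cluster.close();
      }
  }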

RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Abhishek Kumar Maheshwari
Yes, I also tried async mode but I got a max speed of 2500 requests/sec per server.
ExecutorService service = Executors.newFixedThreadPool(1000);
for (final AdLog adLog : li) {
  service.submit(() -> {

Re: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Benjamin Roth
This has nothing to do with sync/async operations. An async operation is also replayable. You receive the result in a future instead. Have you ever dealt with async programming techniques like promises, futures, callbacks? Async programming does not change the fact that you get a result of your
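
As an illustration of that point, a hedged sketch with the Java driver 3.x: the write is submitted asynchronously, and the future still reports success or failure, so a failed statement can be kept and replayed. The session, statement and replay queue here are assumptions made for the example.

  import com.datastax.driver.core.ResultSet;
  import com.datastax.driver.core.ResultSetFuture;
  import com.datastax.driver.core.Session;
  import com.datastax.driver.core.Statement;
  import com.google.common.util.concurrent.FutureCallback;
  import com.google.common.util.concurrent.Futures;
  import com.google.common.util.concurrent.MoreExecutors;
  import java.util.Queue;

  public class AsyncWriteSketch {
      // Submit a statement asynchronously; on failure, push it onto a replay queue.
      static void writeAsync(Session session, Statement stmt, Queue<Statement> replayQueue) {
          ResultSetFuture future = session.executeAsync(stmt);
          Futures.addCallback(future, new FutureCallback<ResultSet>() {
              @Override public void onSuccess(ResultSet rs) {
                  // write acknowledged by the coordinator
              }
              @Override public void onFailure(Throwable t) {
                  replayQueue.add(stmt);   // keep the statement so it can be replayed later
              }
          }, MoreExecutors.directExecutor());
      }
  }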

RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Abhishek Kumar Maheshwari
But I need to do it in sync mode as per the business requirement. If something goes wrong then it should be replayable. That’s why I am using sync mode. Thanks & Regards, Abhishek Kumar Maheshwari +91- 805591 (Mobile) Times Internet Ltd. | A Times of India Group Company FC - 6, Sector 16A, Film

RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Vladimir Yudovin
session.execute is coming from Session session = cluster.connect(); I guess? So actually all threads work with the same TCP connection. It's worth trying the async API with a connection pool. Best regards, Vladimir Yudovin, Winguzone - Cloud Cassandra Hosting On Wed, 23 Nov 2016

Reading Commit log files

2016-11-23 Thread Kamesh
Hi All, I am trying to read Cassandra commit log files, but am unable to do it. I am experimenting with a 1-node cluster (laptop). Cassandra version: 3.8. Updated cassandra.yaml with cdc_enabled: true. After executing the below statements and flushing memtables, I tried reading the commit log

Re: data not replicated on new node

2016-11-23 Thread Malte Pickhan
Not sure if it's really related, but we experienced something similar last Friday. I summarized it in the following issue: https://issues.apache.org/jira/browse/CASSANDRA-12947 Best, Malte 2016-11-23 10:21 GMT+01:00 Oleksandr Shulgin : > On Tue, Nov 22, 2016 at

RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Abhishek Kumar Maheshwari
Hi, I am submitting records to an ExecutorService and below is my client config and code:
cluster = Cluster.builder().addContactPoints(hostAddresses)
  .withRetryPolicy(DefaultRetryPolicy.INSTANCE)
  .withReconnectionPolicy(new ConstantReconnectionPolicy(3L))

RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Vladimir Yudovin
"I have a list with 1cr records. I am just iterating over it and executing the query. Also, I try with 200 threads" Do you fetch each list item and put it in a separate thread to perform the CQL query? Also, how exactly do you connect to Cassandra? If you use the synchronous API it's better to create

RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Abhishek Kumar Maheshwari
Hi Siddharth, To me it seems to be on the Cassandra side, because I have a list with 1cr records. I am just iterating over it and executing the query. Also, I tried with 200 threads but the speed still doesn’t increase as much as expected. On Grafana the write latency is about 10 ms. Thanks & Regards, Abhishek

Re: data not replicated on new node

2016-11-23 Thread Oleksandr Shulgin
On Tue, Nov 22, 2016 at 5:23 PM, Bertrand Brelier < bertrand.brel...@gmail.com> wrote: > Hello Shalom. > > No I really went from 3.1.1 to 3.0.9 . > So you've just installed the 3.0.9 version and re-started with it? I wonder if it's really supported? Regards, -- Alex

RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Abhishek Kumar Maheshwari
Hi Vladimir, I tried the same but it doesn’t increase. Also, in Grafana the average write latency is about 10 ms. Thanks & Regards, Abhishek Kumar Maheshwari +91- 805591 (Mobile) Times Internet Ltd. | A Times of India Group Company FC - 6, Sector 16A, Film City, Noida, U.P. 201301 | INDIA

RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Vladimir Yudovin
So do you see write speed saturation at this number of threads? Does doubling to 200 bring an increase? Best regards, Vladimir Yudovin, Winguzone - Cloud Cassandra Hosting, Zero production time On Wed, 23 Nov 2016 03:31:32 -0500 Abhishek Kumar Maheshwari

Re: How to Choose a Version for Upgrade

2016-11-23 Thread Shalom Sagges
Thanks Vladimir! Shalom Sagges DBA T: +972-74-700-4035 We Create Meaningful Connections

RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Abhishek Kumar Maheshwari
No, I am using 100 threads. Thanks & Regards, Abhishek Kumar Maheshwari +91- 805591 (Mobile) Times Internet Ltd. | A Times of India Group Company FC - 6, Sector 16A, Film City, Noida, U.P. 201301 | INDIA Please do not print this email unless it is absolutely necessary. Spread

RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Vladimir Yudovin
"I have 1Cr records in my Java ArrayList and yes I am writing in sync mode." Is your Java program single-threaded? Best regards, Vladimir Yudovin, Winguzone - Cloud Cassandra Hosting, Zero production time On Wed, 23 Nov 2016 03:09:29 -0500 Abhishek Kumar Maheshwari

Re: How to Choose a Version for Upgrade

2016-11-23 Thread Vladimir Yudovin
Hi Shalom, there is a lot of discussion on this topic, but it seems that for now the 3.0.x line can be called the most stable. If you don't need a specific feature from the 3.x line, take 3.0.10. Best regards, Vladimir Yudovin, Winguzone - Cloud Cassandra Hosting On Wed, 23 Nov 2016

How to Choose a Version for Upgrade

2016-11-23 Thread Shalom Sagges
Hi Everyone, I was wondering how to choose the proper, most stable Cassandra version for a Production environment. Should I follow the version that's used in Datastax Enterprise (in this case 3.0.10) or is there a better way of figuring this out? Thanks! Shalom Sagges DBA T: +972-74-700-4035

Re: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Benjamin Roth
There is cassandra-stress to benchmark your cluster. See docs here: https://docs.datastax.com/en/cassandra/3.x/cassandra/tools/toolsCStress.html?hl=stress 2016-11-23 9:09 GMT+01:00 Abhishek Kumar Maheshwari < abhishek.maheshw...@timesinternet.in>: > Hi Benjamin, > > > > I have 1Cr records in my

RE: Cassandra Config as per server hardware for heavy write

2016-11-23 Thread Abhishek Kumar Maheshwari
Hi Benjamin, I have 1Cr records in my Java ArrayList and yes I am writing in sync mode. My table is as below:
CREATE TABLE XXX_YY_MMS (
  date timestamp, userid text, time timestamp, xid text, addimid text,
  advcid bigint, algo bigint, alla text, aud text,