RE: Throughput Vs Latency

2014-12-25 Thread Job Thomas
 
Even if you connect the client to only one running node, that node just acts as the coordinator; the read/write still ends up on the node where the data actually lives.
 
By multithreaded, I mean non-serialized reads/writes (as opposed to one read or write after another). If the operations are serialized, then surely your equation is right (throughput is inversely proportional to latency).
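
The serialized-versus-concurrent point can be sketched with a little arithmetic (a toy model I'm adding for illustration, not a Cassandra benchmark; the 10 ms latency and thread counts are assumed):

```java
// Toy throughput model: with serialized requests, throughput = 1 / latency;
// with N requests in flight at once (Little's law), throughput ~ N / latency.
public class ThroughputSketch {
    static double throughput(int inFlightRequests, double latencySeconds) {
        return inFlightRequests / latencySeconds;
    }

    public static void main(String[] args) {
        double latency = 0.010; // 10 ms per request (assumed)
        System.out.printf("serialized: %.0f req/s%n", throughput(1, latency));
        System.out.printf("20 threads: %.0f req/s%n", throughput(20, latency));
    }
}
```

The latency per request is unchanged, but twenty concurrent requests give roughly twenty times the throughput, which is why a multithreaded client matters.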
 
Thanks & Regards
Job M Thomas 
Platform & Technology
Mob : 7560885748




Re: Throughput Vs Latency

2014-12-25 Thread Ajay
Hi Thomas,

I am a little confused by what you mean by a multithreaded client. We don't
explicitly invoke reads on multiple servers (for replicated data) from the
client code, so how does a multithreaded client help here?

Thanks
Ajay




RE: Throughput Vs Latency

2014-12-25 Thread Job Thomas
Hi Ajay,
 
My understanding is this: if you have a cluster of 3 nodes with a replication 
factor of 3, then latency plays a larger role in throughput.
 
If the cluster size is 6 with a replication factor of 3 and you are using a 
multithreaded client, then the latency stays the same but you get better 
throughput (not because of the 6 nodes alone, but because of 6 nodes combined with multiple threads).
 
Thanks & Regards
Job M Thomas 
Platform & Technology
Mob : 7560885748




Re: Throughput Vs Latency

2014-12-25 Thread Ajay
Thanks Thomas for the clarification.

If I use a consistency level of QUORUM for both reads and writes, the latency
would affect the throughput, right?

Thanks
Ajay


RE: Throughput Vs Latency

2014-12-25 Thread Job Thomas
Hi,
 
First of all, the write latency of Cassandra is not high (read latency is).
 
The high throughput is achieved through distributed reads and writes. 
 
Your doubt ("if latency is high, how come the throughput is high?") is 
somewhat right if you use a high consistency level for both reads and writes.
 
You get these distributed abilities because Cassandra is not a master/slave 
architecture (like HBase).
 
If your consistency level is lower, then some of the replica nodes are 
free and can serve other reads/writes [assuming you are using a multithreaded 
application].
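
A rough way to see that last point (a toy capacity model I'm adding for illustration, not a Cassandra formula): each operation occupies roughly as many replicas as the consistency level requires, so a lower consistency level leaves more replicas free for concurrent operations.

```java
// Toy model: a cluster of N nodes where each operation busies `consistency`
// replicas can serve about N / consistency operations at the same time.
public class ConsistencyCapacity {
    static int parallelOps(int nodes, int consistency) {
        return nodes / consistency;
    }

    public static void main(String[] args) {
        // 6 nodes, RF = 3: QUORUM touches 2 replicas per op, ONE touches 1.
        System.out.println("CL=QUORUM: " + parallelOps(6, 2) + " ops in parallel");
        System.out.println("CL=ONE:    " + parallelOps(6, 1) + " ops in parallel");
    }
}
```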
 
Thanks & Regards
Job M Thomas 
Platform & Technology
Mob : 7560885748




Throughput Vs Latency

2014-12-25 Thread Ajay
Hi,

I am new to NoSQL (and Cassandra). As I go through a few articles on
Cassandra, they say Cassandra achieves among the highest throughput of the
various NoSQL solutions, but at the cost of high read and write latency. I have
a basic question here: (if my understanding is right) latency is the time
taken to accept input, process it, and respond. If latency is high, how
come the throughput is high?

Thanks
Ajay


Fwd: Re: Cassandra update row after delete immediately, and read that, the data not right?

2014-12-25 Thread yhqruc
Hi, all:
  The test program first inserts one row and then deletes it, then reads it back to compare. The test program runs this flow row by row, not in batch.
  Today I found the problem is caused by the deletion timestamp. The machine running the test program may not be strictly time-synced with the Cassandra machines.

  Why does Cassandra use the local timestamp for deletion?
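
That clock-skew failure mode can be sketched with a toy last-write-wins cell (an illustration I'm adding, not Cassandra's actual code): the delete and the insert are reconciled purely by timestamp, so a delete stamped by a fast client clock can shadow an insert that logically happened later but was stamped by a slower clock.

```java
// Toy last-write-wins cell: whichever mutation carries the higher timestamp
// wins, regardless of the order in which the mutations actually happened.
public class LwwCell {
    String value;      // null means no live value
    long timestamp;    // timestamp of the winning mutation
    boolean tombstone; // true if the winning mutation was a delete

    void apply(long ts, String v, boolean delete) {
        if (ts >= timestamp) {          // higher (or equal) timestamp wins
            timestamp = ts;
            tombstone = delete;
            value = delete ? null : v;
        }
    }

    public static void main(String[] args) {
        LwwCell cell = new LwwCell();
        cell.apply(105, null, true);   // delete, stamped by a clock running fast
        cell.apply(100, "new", false); // later insert, stamped by an accurate clock
        System.out.println(cell.value); // null: the "earlier" delete still wins
    }
}
```

This is why keeping client clocks in sync (or supplying explicit, monotonically increasing timestamps from one place) matters when a delete and an insert race on the same row.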



Re: Sqoop Free Form Import Query Breaks off

2014-12-25 Thread Abraham Elmahrek
Glad it's working.

I'm a bit concerned that the original approach didn't work, though. So just for
future reference, removing the quotes might be necessary as well.

-Abe



Re: Sqoop Free Form Import Query Breaks off

2014-12-25 Thread Vineet Mishra
Hi Abe,

Thanks for your quick suggestion. I had already tried this as well, and
unfortunately it didn't work for my case either.

In the end I found a workaround. Since the same command worked fine when run
directly from the terminal but broke when invoked through Java's Process
exec() (which splits the command around spaces), I just dumped the command
into an xyz.sh file and executed that file instead.

It worked like a charm.

Anyways, thanks Abe for your effective and quick response.
__
Apologies to the Cassandra group for accidentally posting a Sqoop question
to the Cassandra mailing list.



Re: Sqoop Free Form Import Query Breaks off

2014-12-25 Thread Abraham Elmahrek
It seems exec parses the command with StringTokenizer [1], which may have
difficulty with quoted arguments. Try using the array form of this command to
define your String tokens explicitly [2].

Ref:
1.
http://docs.oracle.com/javase/7/docs/api/java/lang/Runtime.html#exec(java.lang.String,%20java.lang.String[],%20java.io.File)
2.
http://docs.oracle.com/javase/7/docs/api/java/lang/Runtime.html#exec(java.lang.String[],%20java.lang.String[],%20java.io.File)
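
To make the difference concrete, here is a small sketch (my own illustration; the shortened command strings are placeholders) contrasting how the single-String form gets tokenized on whitespace with how the array form keeps the quoted query intact:

```java
public class ExecTokenizationDemo {
    public static void main(String[] args) throws Exception {
        // Runtime.exec(String) tokenizes on whitespace and does not honor
        // shell quoting, so the --query value is torn into separate "arguments":
        String flat = "sqoop import --query 'select * from t WHERE $CONDITIONS' -m 1";
        String[] broken = flat.split("\\s+"); // roughly what StringTokenizer does
        System.out.println(broken[3]); // prints 'select  -- the query is split apart

        // Runtime.exec(String[]) passes each element as exactly one argument,
        // so no quoting is needed and the query survives whole:
        String[] cmd = {"sqoop", "import", "--query",
                        "select * from t WHERE $CONDITIONS", "-m", "1"};
        System.out.println(cmd[3]); // the whole query is a single argument
        // Process p = Runtime.getRuntime().exec(cmd); // would actually run it
    }
}
```

With the array form you also drop the single quotes around the query, since they were only there for the shell.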

-Abe



Re: Sqoop Free Form Import Query Breaks off

2014-12-25 Thread Vineet Mishra
Hey, my regrets for that. As you can see, this is an urgent issue I am stuck
on, and in my hurry I ended up adding the Cassandra list instead of the
Cloudera mailing list.

Will take care of it in the future, and thanks for your mail. :)




Re: Sqoop Free Form Import Query Breaks off

2014-12-25 Thread Jens Rantil
Hi,

Does this have anything to do with Cassandra? Also, please try to avoid cross-posting; it makes it hard for
- future readers to read the full thread.
- anyone to follow the full thread.
- anyone to respond. I assume there are few who are subscribed to both mailing 
lists at the same time.

Thank you and merry Christmas,
Jens


Sqoop Free Form Import Query Breaks off

2014-12-25 Thread Vineet Mishra
Hi All,

I am facing an issue with Sqoop (version 1.4.3-cdh4.7.0) import. I have
multithreaded Java code that imports data from multiple databases running
on different servers.

Currently I execute the sqoop job through a Java Process, something like:

Runtime.getRuntime().exec("/usr/bin/sqoop import --driver
com.vertica.jdbc.Driver  --connect jdbc:vertica://host:port/db  --username
user --password pwd --query 'select * from table WHERE $CONDITIONS'
--split-by id --target-dir /user/hive/warehouse/data/db.db/table
--fields-terminated-by '\t' --hive-drop-import-delims -m 1")

I am executing the above command as is, and I run into an exception saying:

WARN tool.BaseSqoopTool: Setting your password on the command-line is
insecure. Consider using -P instead.
14/12/25 18:38:29 ERROR tool.BaseSqoopTool: Error parsing arguments for
import:
14/12/25 18:38:29 ERROR tool.BaseSqoopTool: Unrecognized argument: *
14/12/25 18:38:29 ERROR tool.BaseSqoopTool: Unrecognized argument: from
14/12/25 18:38:29 ERROR tool.BaseSqoopTool: Unrecognized argument: table
14/12/25 18:38:29 ERROR tool.BaseSqoopTool: Unrecognized argument: WHERE
14/12/25 18:38:29 ERROR tool.BaseSqoopTool: Unrecognized argument:
$CONDITIONS'
14/12/25 18:38:29 ERROR tool.BaseSqoopTool: Unrecognized argument:
--split-by
14/12/25 18:38:29 ERROR tool.BaseSqoopTool: Unrecognized argument: id
.
.
.
Although I can understand the error and its cause (the command is internally
being split on spaces, which tears apart the free-form query; it runs fine
with the table parameter instead), the same command works like a charm when
run directly from the command line.

I wanted to know: is there something I am missing by going this way?

If not, why does this issue occur, and what's the workaround?

Urgent Call!

Thanks!


Re: Cassandra update row after delete immediately, and read that, the data not right?

2014-12-25 Thread Jack Krupansky
What RF?

Is the update and read immediately after the delete and insert, or is the
read after doing all the updates?

Is the delete and insert done with a single batch?

-- Jack Krupansky



Cassandra update row after delete immediately, and read that, the data not right?

2014-12-25 Thread yhqruc
Hi, all
  I wrote a program to test Cassandra 2.1. I have a 6-node cluster.
  First, I inserted 1 million rows into Cassandra, with row keys from 1 to 100.
  Then I ran my test program. It first deletes (using batch mutate) a row and inserts (using batch mutate) that same row, then reads (using gen_slice_range) the same row. After that it checks whether the data read back matches the data inserted. The consistency level used is quorum.
  I found some cases where they do not match, about 1/1. In these error cases, some columns differ. Then I used cassandra-cli to check the data and found that the column does not exist. It seems the insert was partial.
  My test program has 20 threads; the QPS is about 800.

  What's wrong with Cassandra??
  

Thanks!