Re: Understanding which table had digest mismatch

2021-02-27 Thread Gil Ganz
Erick - I understand; the thing is, repairs are causing issues and I would like
to have the option to run them only on the tables that really need it.
Kane - Thanks, good idea, I will check that metric.



On Fri, Feb 26, 2021 at 12:07 AM Kane Wilson  wrote:

> You should be able to use the Table metric ReadRepairRequests to determine
> which table has read repairs occurring (fairly sure it's present in 3.11).
> See
> https://cassandra.apache.org/doc/latest/operating/metrics.html#table-metrics
>
> Cheers,
> Kane
>
> raft.so - Cassandra consulting, support, and managed services
>
>
> On Fri, Feb 26, 2021 at 8:12 AM Erick Ramirez 
> wrote:
>
>> Unfortunately, you won't be able to work it out just based on that debug
>> message. The only suggestion I have is to run repairs regularly. Cheers!
>>
>>>


Re: Understanding which table had digest mismatch

2021-02-25 Thread Kane Wilson
You should be able to use the Table metric ReadRepairRequests to determine
which table has read repairs occurring (fairly sure it's present in 3.11).
See
https://cassandra.apache.org/doc/latest/operating/metrics.html#table-metrics
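
If you want to poll that metric programmatically rather than from a JMX console,
a minimal Java sketch along these lines should work (the host, port, keyspace and
table names are placeholders, the MBean name follows the Table metrics naming on
the page above, and it assumes JMX authentication is not enabled on the node):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ReadRepairRequestsProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder node address; 7199 is the default Cassandra JMX port.
        JMXServiceURL url =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // One MBean per keyspace/table; my_ks and my_table are placeholders.
            ObjectName metric = new ObjectName(
                "org.apache.cassandra.metrics:type=Table,keyspace=my_ks,scope=my_table,name=ReadRepairRequests");
            // The metric is a meter; "Count" is the running total of read repair requests.
            System.out.println("ReadRepairRequests = " + mbs.getAttribute(metric, "Count"));
        }
    }
}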

Cheers,
Kane

raft.so - Cassandra consulting, support, and managed services


On Fri, Feb 26, 2021 at 8:12 AM Erick Ramirez 
wrote:

> Unfortunately, you won't be able to work it out just based on that debug
> message. The only suggestion I have is to run repairs regularly. Cheers!
>
>>


Re: Understanding which table had digest mismatch

2021-02-25 Thread Erick Ramirez
Unfortunately, you won't be able to work it out just based on that debug
message. The only suggestion I have is to run repairs regularly. Cheers!

>


Understanding which table had digest mismatch

2021-02-25 Thread Gil Ganz
Hey
I'm running cassandra 3.11.9 and I have a lot of messages like this:

DEBUG [ReadRepairStage:2] 2021-02-25 16:41:11,464 ReadCallback.java:244 -
Digest mismatch:
org.apache.cassandra.service.DigestMismatchException: Mismatch for key
DecoratedKey(4059620144736691554,
000455f1134b616e63656c61726961204c7563796665726100)
(0a30700ea31e8b75d454f4e7868b5fcb vs 5e8dd4f4468ebeb6e4e6998480c1931a)
at
org.apache.cassandra.service.DigestResolver.compareResponses(DigestResolver.java:92)
~[apache-cassandra-3.11.9.jar:3.11.9]
at
org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:235)
~[apache-cassandra-3.11.9.jar:3.11.9]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[na:1.8.0_271]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[na:1.8.0_271]
at
org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:84)
[apache-cassandra-3.11.9.jar:3.11.9]

I know my cluster has consistency issues from time to time, mostly due to
GC, and I would like to find out which table it is.
Having the id does not help much, because many tables share the same id.
Is there any way to know which table has the issue?
Gil


Re: Digest mismatch

2020-12-14 Thread Joe Obernberger

Some more info.

From Java, using the DataStax 4.9.0 driver, I'm selecting an entire 
table; after about 17 million rows (the table is probably around 150 
million rows), I get:


com.datastax.oss.driver.api.core.servererrors.ReadFailureException: 
Cassandra failure during read query at consistency ONE (1 responses were 
required but only 0 replica responded, 1 failed)


It's almost as if the data was not written with LOCAL_QUORUM, but I've 
triple checked.


If I stop writes to the table and reduce the load on Cassandra, then it 
(the java program) works OK.  Presto queries still fail, but that might be a 
Presto issue.  Interestingly, they sometimes fail quickly, coming back 
with the 'Cassandra failure during read query' error almost immediately, but 
sometimes go through 140 million rows and then die.


Are regular table repairs required to be run when using LOCAL_QUORUM?  I 
see no nodes down, or disk failures.


-Joe

On 12/14/2020 9:41 AM, Joe Obernberger wrote:


Thanks all for the help on this.  I've changed all my writes to 
LOCAL_QUORUM, and same with reads.  Under a constant load of doing 
writes to a table and reads from the same table, I'm still getting the:


DEBUG [ReadRepairStage:372] 2020-12-14 09:36:09,002 
ReadCallback.java:244 - Digest mismatch:
org.apache.cassandra.service.DigestMismatchException: Mismatch for key 
DecoratedKey(-7287062361589376757, 
44535f313034335f32353839305f323032302d31322d31325430302d31392d33312e3330335a) 
(054250ecd7170b1707ec36c6f1798ed0 vs 5752eec36bff050dd363b7803c500a95)
    at 
org.apache.cassandra.service.DigestResolver.compareResponses(DigestResolver.java:92) 
~[apache-cassandra-3.11.9.jar:3.11.9]
    at 
org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:235) 
~[apache-cassandra-3.11.9.jar:3.11.9]
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[na:1.8.0_272]
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[na:1.8.0_272]
    at 
org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:84) 
[apache-cassandra-3.11.9.jar:3.11.9]

    at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_272]

Under load this happens a lot; several times a second on each of the 
server nodes.  I started with a new table and under light load, it 
worked wonderfully - no issues.  But under heavy load, it still 
occurs.  Is there a different setting?
Also, when this happens, I cannot query the table from presto as I 
then get the familiar:


"Query 20201214_143949_0_b3fnt failed: Cassandra timeout during 
read query at consistency LOCAL_QUORUM (2 responses were required but 
only 1 replica responded)"


Changing presto to use ONE results in an error saying 1 was required, 
but only 1 responded.


Any ideas?  Things to try?  Thanks!

-Joe

On 12/3/2020 12:49 AM, Erick Ramirez wrote:


Thank you Steve - once I have the key, how do I get to a node?

Run this command to determine which replicas own the partition:

$ nodetool getendpoints <keyspace> <table> <key>

So if the propagation has not taken place and a node doesn't have
the data and is the first to 'be asked' the client will get no data?

That's correct. It will not return data it doesn't have when querying 
with a consistency of ONE. There are limited cases where ONE is 
applicable. In most cases, a strong consistency of LOCAL_QUORUM is 
recommended to avoid the scenario you described. Cheers!




Re: Digest mismatch

2020-12-14 Thread Joe Obernberger
Thanks all for the help on this.  I've changed all my writes to 
LOCAL_QUORUM, and same with reads.  Under a constant load of doing 
writes to a table and reads from the same table, I'm still getting the:


DEBUG [ReadRepairStage:372] 2020-12-14 09:36:09,002 
ReadCallback.java:244 - Digest mismatch:
org.apache.cassandra.service.DigestMismatchException: Mismatch for key 
DecoratedKey(-7287062361589376757, 
44535f313034335f32353839305f323032302d31322d31325430302d31392d33312e3330335a) 
(054250ecd7170b1707ec36c6f1798ed0 vs 5752eec36bff050dd363b7803c500a95)
    at 
org.apache.cassandra.service.DigestResolver.compareResponses(DigestResolver.java:92) 
~[apache-cassandra-3.11.9.jar:3.11.9]
    at 
org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:235) 
~[apache-cassandra-3.11.9.jar:3.11.9]
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[na:1.8.0_272]
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[na:1.8.0_272]
    at 
org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:84) 
[apache-cassandra-3.11.9.jar:3.11.9]

    at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_272]

Under load this happens a lot; several times a second on each of the 
server nodes.  I started with a new table and under light load, it 
worked wonderfully - no issues.  But under heavy load, it still occurs.  
Is there a different setting?
Also, when this happens, I cannot query the table from presto as I then 
get the familiar:


"Query 20201214_143949_0_b3fnt failed: Cassandra timeout during read 
query at consistency LOCAL_QUORUM (2 responses were required but only 1 
replica responded)"


Changing presto to use ONE results in an error saying 1 was required, but 
only 1 responded.


Any ideas?  Things to try?  Thanks!

-Joe

On 12/3/2020 12:49 AM, Erick Ramirez wrote:


Thank you Steve - once I have the key, how do I get to a node?

Run this command to determine which replicas own the partition:

$ nodetool getendpoints <keyspace> <table> <key>

So if the propagation has not taken place and a node doesn't have
the data and is the first to 'be asked' the client will get no data?

That's correct. It will not return data it doesn't have when querying 
with a consistency of ONE. There are limited cases where ONE is 
applicable. In most cases, a strong consistency of LOCAL_QUORUM is 
recommended to avoid the scenario you described. Cheers!




Re: Digest mismatch

2020-12-03 Thread Joe Obernberger
Thank you.  OK - I can see from 'nodetool getendpoints keyspace table 
key' that 3 nodes respond as one would expect.  My theory is that once I 
encounter the error, a read repair is triggered, and by the time I 
execute nodetool, 3 nodes respond.


I tried a test with the same table, but with LOCAL_QUORUM on reads and 
writes of new data, and it works.  Thank you all for that!  If I don't 
care which version of the data is returned, then I should be able to use 
ONE on reads, if LOCAL_QUORUM was used on writes - yes?


-Joe

On 12/3/2020 12:49 AM, Erick Ramirez wrote:


Thank you Steve - once I have the key, how do I get to a node?

Run this command to determine which replicas own the partition:

$ nodetool getendpoints <keyspace> <table> <key>

So if the propagation has not taken place and a node doesn't have
the data and is the first to 'be asked' the client will get no data?

That's correct. It will not return data it doesn't have when querying 
with a consistency of ONE. There are limited cases where ONE is 
applicable. In most cases, a strong consistency of LOCAL_QUORUM is 
recommended to avoid the scenario you described. Cheers!


 


Re: Digest mismatch

2020-12-02 Thread Erick Ramirez
>
> Thank you Steve - once I have the key, how do I get to a node?
>
Run this command to determine which replicas own the partition:

$ nodetool getendpoints <keyspace> <table> <key>

> So if the propagation has not taken place and a node doesn't have the data
> and is the first to 'be asked' the client will get no data?
>
That's correct. It will not return data it doesn't have when querying with
a consistency of ONE. There are limited cases where ONE is applicable. In
most cases, a strong consistency of LOCAL_QUORUM is recommended to avoid
the scenario you described. Cheers!
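
For a client on the 4.x DataStax Java driver, a minimal sketch of requesting
LOCAL_QUORUM for a single statement could look like this (keyspace, table and key
are placeholders; the consistency can also be set globally through
basic.request.consistency in application.conf, as shown elsewhere in this thread):

import com.datastax.oss.driver.api.core.ConsistencyLevel;
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.ResultSet;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;

public class LocalQuorumRead {
    public static void main(String[] args) {
        // Connects to localhost:9042 by default; point it at your own cluster.
        try (CqlSession session = CqlSession.builder().build()) {
            // Placeholder keyspace/table/key; the consistency override applies
            // to this statement only.
            SimpleStatement stmt = SimpleStatement.builder(
                    "SELECT * FROM my_ks.my_table WHERE id = ?")
                .addPositionalValue("some-key")
                .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM)
                .build();
            ResultSet rs = session.execute(stmt);
            rs.forEach(row -> System.out.println(row.getFormattedContents()));
        }
    }
}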


Re: Digest mismatch

2020-12-02 Thread Joe Obernberger

Thank you Steve - once I have the key, how do I get to a node?

After reading some of the documentation, it looks like the 
load-balancing-policy below *is* a token aware policy.  Perhaps writes 
need to be done with QUORUM; I don't know how long Cassandra will take 
to make sure replicas are consistent when doing ONE for all writes.  So 
if the propagation has not taken place and a node doesn't have the data 
and is the first to 'be asked' the client will get no data?


-Joe

On 12/2/2020 2:09 PM, Steve Lacerda wrote:
If you can determine the key, then you can determine which nodes do 
and do not have the data. You may be able to glean a bit more 
information like that, maybe one node is having problems, versus 
entire cluster.


On Wed, Dec 2, 2020 at 9:32 AM Joe Obernberger <joseph.obernber...@gmail.com> wrote:


Clients are using an application.conf like:

datastax-java-driver {
  basic.request.timeout = 60 seconds
  basic.request.consistency = ONE
  basic.contact-points = ["172.16.110.3:9042", "172.16.110.4:9042",
"172.16.100.208:9042", "172.16.100.224:9042", "172.16.100.225:9042",
"172.16.100.253:9042", "172.16.100.254:9042"]
  basic.load-balancing-policy {
    local-datacenter = datacenter1
  }
}

So no, I'm not using a token aware policy.  I'm googling that
now...cuz I don't know what it is!

-Joe

On 12/2/2020 12:18 PM, Carl Mueller wrote:

Are you using token aware policy for the driver?

If your writes are one and your reads are one, the propagation
may not have happened depending on the coordinator that is used.

TokenAware will make that a bit better.

On Wed, Dec 2, 2020 at 11:12 AM Joe Obernberger <joseph.obernber...@gmail.com> wrote:

Hi Carl - thank you for replying.
I am using Cassandra 3.11.9-1

Rows are not typically being deleted - I assume you're
referring to Tombstones.  I don't think that should be the
case here as I don't think we've deleted anything here.
This is a test cluster and some of the machines are small
(hence the one node with 128 tokens and 14.6% - it has a lot
less disk space than the other nodes).  This is one of the
features that I really like with Cassandra - being able to
size nodes based on disk/CPU/RAM.

All data is currently written with ONE.  All data is read
with ONE.  I can replicate this issue at will, so can try
different things easily.  I tried changing the read process
to use QUORUM and the issue still takes place.  Right now I'm
running a 'nodetool repair' to see if that helps.  Our
largest table 'doc' has the following stats:

Table: doc
SSTable count: 28
Space used (live): 113609995010
Space used (total): 113609995010
Space used by snapshots (total): 0
Off heap memory used (total): 225006197
SSTable Compression Ratio: 0.37730474570644196
Number of partitions (estimate): 93641747
Memtable cell count: 0
Memtable data size: 0
Memtable off heap memory used: 0
Memtable switch count: 3712
Local read count: 891065091
Local read latency: NaN ms
Local write count: 7448281135
Local write latency: NaN ms
Pending flushes: 0
Percent repaired: 0.0
Bloom filter false positives: 988
Bloom filter false ratio: 0.1
Bloom filter space used: 151149880
Bloom filter off heap memory used: 151149656
Index summary off heap memory used: 38654701
Compression metadata off heap memory used: 35201840
Compacted partition minimum bytes: 104
Compacted partition maximum bytes: 3379391
Compacted partition mean bytes: 3389
Average live cells per slice (last five minutes): NaN
Maximum live cells per slice (last five minutes): 0
Average tombstones per slice (last five minutes): NaN
Maximum tombstones per slice (last five minutes): 0
Dropped Mutations: 8174438

Thoughts/ideas?  Thank you!

-Joe

On 12/2/2020 11:49 AM, Carl Mueller wrote:

Why is one of your nodes only at 14.6% ownership? That's
weird, unless you have a small rowcount.

Are you frequently deleting rows? Are you frequently writing
rows at ONE?

What version of cassandra?



On Wed, Dec 2, 2020 at 9:56 AM Joe Obernberger <joseph.obernber...@gmail.com> wrote:

Hi All - this is my first post here.  I've been using
Cassandra for
several months now and am loving it.  We are moving from
Apache HBase 

Re: Digest mismatch

2020-12-02 Thread Steve Lacerda
If you can determine the key, then you can determine which nodes do and do
not have the data. You may be able to glean a bit more information like
that, maybe one node is having problems, versus entire cluster.

On Wed, Dec 2, 2020 at 9:32 AM Joe Obernberger 
wrote:

> Clients are using an application.conf like:
>
> datastax-java-driver {
>   basic.request.timeout = 60 seconds
>   basic.request.consistency = ONE
>   basic.contact-points = ["172.16.110.3:9042", "172.16.110.4:9042", "
> 172.16.100.208:9042", "172.16.100.224:9042", "172.16.100.225:9042", "
> 172.16.100.253:9042", "172.16.100.254:9042"]
>   basic.load-balancing-policy {
> local-datacenter = datacenter1
>   }
> }
>
> So no, I'm not using a token aware policy.  I'm googling that now...cuz I
> don't know what it is!
>
> -Joe
> On 12/2/2020 12:18 PM, Carl Mueller wrote:
>
> Are you using token aware policy for the driver?
>
> If your writes are one and your reads are one, the propagation may not
> have happened depending on the coordinator that is used.
>
> TokenAware will make that a bit better.
>
> On Wed, Dec 2, 2020 at 11:12 AM Joe Obernberger <
> joseph.obernber...@gmail.com> wrote:
>
>> Hi Carl - thank you for replying.
>> I am using Cassandra 3.11.9-1
>>
>> Rows are not typically being deleted - I assume you're referring to
>> Tombstones.  I don't think that should be the case here as I don't think
>> we've deleted anything here.
>> This is a test cluster and some of the machines are small (hence the one
>> node with 128 tokens and 14.6% - it has a lot less disk space than the
>> other nodes).  This is one of the features that I really like with
>> Cassandra - being able to size nodes based on disk/CPU/RAM.
>>
>> All data is currently written with ONE.  All data is read with ONE.  I
>> can replicate this issue at will, so can try different things easily.  I
>> tried changing the read process to use QUORUM and the issue still takes
>> place.  Right now I'm running a 'nodetool repair' to see if that helps.
>> Our largest table 'doc' has the following stats:
>>
>> Table: doc
>> SSTable count: 28
>> Space used (live): 113609995010
>> Space used (total): 113609995010
>> Space used by snapshots (total): 0
>> Off heap memory used (total): 225006197
>> SSTable Compression Ratio: 0.37730474570644196
>> Number of partitions (estimate): 93641747
>> Memtable cell count: 0
>> Memtable data size: 0
>> Memtable off heap memory used: 0
>> Memtable switch count: 3712
>> Local read count: 891065091
>> Local read latency: NaN ms
>> Local write count: 7448281135
>> Local write latency: NaN ms
>> Pending flushes: 0
>> Percent repaired: 0.0
>> Bloom filter false positives: 988
>> Bloom filter false ratio: 0.1
>> Bloom filter space used: 151149880
>> Bloom filter off heap memory used: 151149656
>> Index summary off heap memory used: 38654701
>> Compression metadata off heap memory used: 35201840
>> Compacted partition minimum bytes: 104
>> Compacted partition maximum bytes: 3379391
>> Compacted partition mean bytes: 3389
>> Average live cells per slice (last five minutes): NaN
>> Maximum live cells per slice (last five minutes): 0
>> Average tombstones per slice (last five minutes): NaN
>> Maximum tombstones per slice (last five minutes): 0
>> Dropped Mutations: 8174438
>>
>> Thoughts/ideas?  Thank you!
>>
>> -Joe
>> On 12/2/2020 11:49 AM, Carl Mueller wrote:
>>
>> Why is one of your nodes only at 14.6% ownership? That's weird, unless
>> you have a small rowcount.
>>
>> Are you frequently deleting rows? Are you frequently writing rows at ONE?
>>
>> What version of cassandra?
>>
>>
>>
>> On Wed, Dec 2, 2020 at 9:56 AM Joe Obernberger <
>> joseph.obernber...@gmail.com> wrote:
>>
>>> Hi All - this is my first post here.  I've been using Cassandra for
>>> several months now and am loving it.  We are moving from Apache HBase to
>>> Cassandra for a big data analytics platform.
>>>
>>> I'm using java to get rows from Cassandra and very frequently get a
>>> java.util.NoSuchElementException when iterating through a ResultSet.  If
>>> I retry this query again (often several times), it works.  The debug log
>>> on the Cassandra nodes show this message:
>>> org.apache.cassandra.service.DigestMismatchException: Mismatch for key
>>> DecoratedKey
>>>
>>> My cluster looks like this:
>>>
>>> Datacenter: datacenter1
>>> ===
>>> Status=Up/Down
>>> |/ State=Normal/Leaving/Joining/Moving
>>> --  Address Load   Tokens   Owns (effective)  Host
>>> ID   Rack
>>> UN  172.16.100.224  340.5 GiB  512  50.9%
>>> 8ba646ac-2b33-49de-a220-ae9842f18806  rack1
>>> UN  172.16.100.208  269.19 GiB  384  40.3%
>>> 4e0ba42f-649b-425a-857a-34497eb3036e  rack1
>>> UN  172.16.100.225  282.83 GiB  512  50.4%
>>> 247f3d70-d13b-4d68-9a53-2ed58e01a63e  rack1
>>> UN  172.16.110.3    409.78 GiB  768  63.2%
>>> 0abea102-06d2-4309-af36-a3163e8f00d8  rack1
>>> UN  172.16.110.4    330.15 GiB  512  

Re: Digest mismatch

2020-12-02 Thread Joe Obernberger

Clients are using an application.conf like:

datastax-java-driver {
  basic.request.timeout = 60 seconds
  basic.request.consistency = ONE
  basic.contact-points = ["172.16.110.3:9042", "172.16.110.4:9042", 
"172.16.100.208:9042", "172.16.100.224:9042", "172.16.100.225:9042", 
"172.16.100.253:9042", "172.16.100.254:9042"]

  basic.load-balancing-policy {
    local-datacenter = datacenter1
  }
}

So no, I'm not using a token aware policy.  I'm googling that now...cuz 
I don't know what it is!


-Joe

On 12/2/2020 12:18 PM, Carl Mueller wrote:

Are you using token aware policy for the driver?

If your writes are one and your reads are one, the propagation may not 
have happened depending on the coordinator that is used.


TokenAware will make that a bit better.

On Wed, Dec 2, 2020 at 11:12 AM Joe Obernberger <joseph.obernber...@gmail.com> wrote:


Hi Carl - thank you for replying.
I am using Cassandra 3.11.9-1

Rows are not typically being deleted - I assume you're referring
to Tombstones.  I don't think that should be the case here as I
don't think we've deleted anything here.
This is a test cluster and some of the machines are small (hence
the one node with 128 tokens and 14.6% - it has a lot less disk
space than the other nodes).  This is one of the features that I
really like with Cassandra - being able to size nodes based on
disk/CPU/RAM.

All data is currently written with ONE.  All data is read with
ONE.  I can replicate this issue at will, so can try different
things easily.  I tried changing the read process to use QUORUM
and the issue still takes place. Right now I'm running a 'nodetool
repair' to see if that helps.  Our largest table 'doc' has the
following stats:

Table: doc
SSTable count: 28
Space used (live): 113609995010
Space used (total): 113609995010
Space used by snapshots (total): 0
Off heap memory used (total): 225006197
SSTable Compression Ratio: 0.37730474570644196
Number of partitions (estimate): 93641747
Memtable cell count: 0
Memtable data size: 0
Memtable off heap memory used: 0
Memtable switch count: 3712
Local read count: 891065091
Local read latency: NaN ms
Local write count: 7448281135
Local write latency: NaN ms
Pending flushes: 0
Percent repaired: 0.0
Bloom filter false positives: 988
Bloom filter false ratio: 0.1
Bloom filter space used: 151149880
Bloom filter off heap memory used: 151149656
Index summary off heap memory used: 38654701
Compression metadata off heap memory used: 35201840
Compacted partition minimum bytes: 104
Compacted partition maximum bytes: 3379391
Compacted partition mean bytes: 3389
Average live cells per slice (last five minutes): NaN
Maximum live cells per slice (last five minutes): 0
Average tombstones per slice (last five minutes): NaN
Maximum tombstones per slice (last five minutes): 0
Dropped Mutations: 8174438

Thoughts/ideas?  Thank you!

-Joe

On 12/2/2020 11:49 AM, Carl Mueller wrote:

Why is one of your nodes only at 14.6% ownership? That's weird,
unless you have a small rowcount.

Are you frequently deleting rows? Are you frequently writing rows
at ONE?

What version of cassandra?



On Wed, Dec 2, 2020 at 9:56 AM Joe Obernberger <joseph.obernber...@gmail.com> wrote:

Hi All - this is my first post here.  I've been using
Cassandra for
several months now and am loving it.  We are moving from
Apache HBase to
Cassandra for a big data analytics platform.

I'm using java to get rows from Cassandra and very frequently
get a
java.util.NoSuchElementException when iterating through a
ResultSet.  If
I retry this query again (often several times), it works. 
The debug log
on the Cassandra nodes show this message:
org.apache.cassandra.service.DigestMismatchException:
Mismatch for key
DecoratedKey

My cluster looks like this:

Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens   Owns (effective) 
Host
ID   Rack
UN  172.16.100.224  340.5 GiB  512  50.9%
8ba646ac-2b33-49de-a220-ae9842f18806  rack1
UN  172.16.100.208  269.19 GiB  384  40.3%
4e0ba42f-649b-425a-857a-34497eb3036e  rack1
UN  172.16.100.225  282.83 GiB  512  50.4%
247f3d70-d13b-4d68-9a53-2ed58e01a63e  rack1
UN  172.16.110.3    409.78 GiB  768  63.2%
0abea102-06d2-4309-af36-a3163e8f00d8  rack1
UN  172.16.110.4    330.15 GiB  512  50.6%
2a5ae735-6304-4e99-924b-44d9d5ec86b7  rack1
UN  172.16.100.253  98.88 GiB  128   

Re: Digest mismatch

2020-12-02 Thread Joe Obernberger

Python eh?  What's that?  Kidding.  (Java guy over here...)

I grepped the logs for mutations but only see messages like:

2020-09-14 16:15:19,963 CommitLog.java:149 - Log replay complete, 0 
replayed mutations

and
2020-09-17 16:22:13,020 CommitLog.java:149 - Log replay complete, 291708 
replayed mutations


Typically, we read very soon after the write, which I thought might also be a 
problem; however, at this point it has been 24+ hours since the data I'm now 
trying to read was written.  It happens very easily.

By determining the partition key, how will that help?

-Joe

On 12/2/2020 12:16 PM, Steve Lacerda wrote:
The digest mismatch typically shows the partition key info, with 
something like this:


DecoratedKey(-1671292413668442751, 48343732322d3838353032)

That refers to the partition key, which you can gather like so:

python
import binascii
binascii.unhexlify('48343732322d3838353032')
'H4722-88502'

My assumption is that since you are reading and writing with one, that 
some nodes have the data and others don't. Are you seeing any dropped 
mutations in the logs? How long after the write are you attempting to 
read the same data?







On Wed, Dec 2, 2020 at 9:12 AM Joe Obernberger <joseph.obernber...@gmail.com> wrote:


Hi Carl - thank you for replying.
I am using Cassandra 3.11.9-1

Rows are not typically being deleted - I assume you're referring
to Tombstones.  I don't think that should be the case here as I
don't think we've deleted anything here.
This is a test cluster and some of the machines are small (hence
the one node with 128 tokens and 14.6% - it has a lot less disk
space than the other nodes).  This is one of the features that I
really like with Cassandra - being able to size nodes based on
disk/CPU/RAM.

All data is currently written with ONE.  All data is read with
ONE.  I can replicate this issue at will, so can try different
things easily.  I tried changing the read process to use QUORUM
and the issue still takes place. Right now I'm running a 'nodetool
repair' to see if that helps.  Our largest table 'doc' has the
following stats:

Table: doc
SSTable count: 28
Space used (live): 113609995010
Space used (total): 113609995010
Space used by snapshots (total): 0
Off heap memory used (total): 225006197
SSTable Compression Ratio: 0.37730474570644196
Number of partitions (estimate): 93641747
Memtable cell count: 0
Memtable data size: 0
Memtable off heap memory used: 0
Memtable switch count: 3712
Local read count: 891065091
Local read latency: NaN ms
Local write count: 7448281135
Local write latency: NaN ms
Pending flushes: 0
Percent repaired: 0.0
Bloom filter false positives: 988
Bloom filter false ratio: 0.1
Bloom filter space used: 151149880
Bloom filter off heap memory used: 151149656
Index summary off heap memory used: 38654701
Compression metadata off heap memory used: 35201840
Compacted partition minimum bytes: 104
Compacted partition maximum bytes: 3379391
Compacted partition mean bytes: 3389
Average live cells per slice (last five minutes): NaN
Maximum live cells per slice (last five minutes): 0
Average tombstones per slice (last five minutes): NaN
Maximum tombstones per slice (last five minutes): 0
Dropped Mutations: 8174438

Thoughts/ideas?  Thank you!

-Joe

On 12/2/2020 11:49 AM, Carl Mueller wrote:

Why is one of your nodes only at 14.6% ownership? That's weird,
unless you have a small rowcount.

Are you frequently deleting rows? Are you frequently writing rows
at ONE?

What version of cassandra?



On Wed, Dec 2, 2020 at 9:56 AM Joe Obernberger <joseph.obernber...@gmail.com> wrote:

Hi All - this is my first post here.  I've been using
Cassandra for
several months now and am loving it.  We are moving from
Apache HBase to
Cassandra for a big data analytics platform.

I'm using java to get rows from Cassandra and very frequently
get a
java.util.NoSuchElementException when iterating through a
ResultSet.  If
I retry this query again (often several times), it works. 
The debug log
on the Cassandra nodes show this message:
org.apache.cassandra.service.DigestMismatchException:
Mismatch for key
DecoratedKey

My cluster looks like this:

Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens   Owns (effective) 
Host
ID   Rack
UN  172.16.100.224  340.5 GiB  512  50.9%
8ba646ac-2b33-49de-a220-ae9842f18806  rack1
UN  172.16.100.208  269.19 GiB  384  40.3%
4e0ba42f-649b-425a-857a-

Re: Digest mismatch

2020-12-02 Thread Carl Mueller
Are you using token aware policy for the driver?

If your writes are one and your reads are one, the propagation may not have
happened depending on the coordinator that is used.

TokenAware will make that a bit better.
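
For reference, on the 3.x Java driver wrapping the policy is a one-liner; a
minimal sketch follows (the contact point and datacenter1 are taken from Joe's
application.conf elsewhere in the thread, everything else is a placeholder). On
the 4.x driver Joe is actually using, the default load balancing policy is
already token-aware, so no extra configuration should be needed there.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

public class TokenAwareExample {
    public static void main(String[] args) {
        // Wrap the usual DC-aware policy so each request is routed to a replica
        // for its partition key first.
        Cluster cluster = Cluster.builder()
            .addContactPoint("172.16.110.3")
            .withLoadBalancingPolicy(new TokenAwarePolicy(
                DCAwareRoundRobinPolicy.builder()
                    .withLocalDc("datacenter1")
                    .build()))
            .build();
        Session session = cluster.connect();
        // ... run queries; the driver needs the partition key (bound values or an
        // explicit routing key) to pick a replica, otherwise it falls back to
        // plain round-robin.
        session.close();
        cluster.close();
    }
}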

On Wed, Dec 2, 2020 at 11:12 AM Joe Obernberger <
joseph.obernber...@gmail.com> wrote:

> Hi Carl - thank you for replying.
> I am using Cassandra 3.11.9-1
>
> Rows are not typically being deleted - I assume you're referring to
> Tombstones.  I don't think that should be the case here as I don't think
> we've deleted anything here.
> This is a test cluster and some of the machines are small (hence the one
> node with 128 tokens and 14.6% - it has a lot less disk space than the
> other nodes).  This is one of the features that I really like with
> Cassandra - being able to size nodes based on disk/CPU/RAM.
>
> All data is currently written with ONE.  All data is read with ONE.  I can
> replicate this issue at will, so can try different things easily.  I tried
> changing the read process to use QUORUM and the issue still takes place.
> Right now I'm running a 'nodetool repair' to see if that helps.  Our
> largest table 'doc' has the following stats:
>
> Table: doc
> SSTable count: 28
> Space used (live): 113609995010
> Space used (total): 113609995010
> Space used by snapshots (total): 0
> Off heap memory used (total): 225006197
> SSTable Compression Ratio: 0.37730474570644196
> Number of partitions (estimate): 93641747
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 3712
> Local read count: 891065091
> Local read latency: NaN ms
> Local write count: 7448281135
> Local write latency: NaN ms
> Pending flushes: 0
> Percent repaired: 0.0
> Bloom filter false positives: 988
> Bloom filter false ratio: 0.1
> Bloom filter space used: 151149880
> Bloom filter off heap memory used: 151149656
> Index summary off heap memory used: 38654701
> Compression metadata off heap memory used: 35201840
> Compacted partition minimum bytes: 104
> Compacted partition maximum bytes: 3379391
> Compacted partition mean bytes: 3389
> Average live cells per slice (last five minutes): NaN
> Maximum live cells per slice (last five minutes): 0
> Average tombstones per slice (last five minutes): NaN
> Maximum tombstones per slice (last five minutes): 0
> Dropped Mutations: 8174438
>
> Thoughts/ideas?  Thank you!
>
> -Joe
> On 12/2/2020 11:49 AM, Carl Mueller wrote:
>
> Why is one of your nodes only at 14.6% ownership? That's weird, unless you
> have a small rowcount.
>
> Are you frequently deleting rows? Are you frequently writing rows at ONE?
>
> What version of cassandra?
>
>
>
> On Wed, Dec 2, 2020 at 9:56 AM Joe Obernberger <
> joseph.obernber...@gmail.com> wrote:
>
>> Hi All - this is my first post here.  I've been using Cassandra for
>> several months now and am loving it.  We are moving from Apache HBase to
>> Cassandra for a big data analytics platform.
>>
>> I'm using java to get rows from Cassandra and very frequently get a
>> java.util.NoSuchElementException when iterating through a ResultSet.  If
>> I retry this query again (often several times), it works.  The debug log
>> on the Cassandra nodes show this message:
>> org.apache.cassandra.service.DigestMismatchException: Mismatch for key
>> DecoratedKey
>>
>> My cluster looks like this:
>>
>> Datacenter: datacenter1
>> ===
>> Status=Up/Down
>> |/ State=Normal/Leaving/Joining/Moving
>> --  Address Load   Tokens   Owns (effective)  Host
>> ID   Rack
>> UN  172.16.100.224  340.5 GiB  512  50.9%
>> 8ba646ac-2b33-49de-a220-ae9842f18806  rack1
>> UN  172.16.100.208  269.19 GiB  384  40.3%
>> 4e0ba42f-649b-425a-857a-34497eb3036e  rack1
>> UN  172.16.100.225  282.83 GiB  512  50.4%
>> 247f3d70-d13b-4d68-9a53-2ed58e01a63e  rack1
>> UN  172.16.110.3    409.78 GiB  768  63.2%
>> 0abea102-06d2-4309-af36-a3163e8f00d8  rack1
>> UN  172.16.110.4    330.15 GiB  512  50.6%
>> 2a5ae735-6304-4e99-924b-44d9d5ec86b7  rack1
>> UN  172.16.100.253  98.88 GiB  128  14.6%
>> 6b528b0b-d7f7-4378-bba8-1857802d4f18  rack1
>> UN  172.16.100.254  204.5 GiB  256  30.0%
>> 87d0cb48-a57d-460e-bd82-93e6e52e93ea  rack1
>>
>> I suspect this has to do with how I'm using consistency levels?
>> Typically I'm using ONE.  I just set the dclocal_read_repair_chance to
>> 0.0, but I'm still seeing the issue.  Any help/tips?
>>
>> Thank you!
>>
>> -Joe Obernberger
>>
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>
>>
>

Re: Digest mismatch

2020-12-02 Thread Steve Lacerda
The digest mismatch typically shows the partition key info, with something
like this:

DecoratedKey(-1671292413668442751, 48343732322d3838353032)

That refers to the partition key, which you can gather like so:

python
import binascii
binascii.unhexlify('48343732322d3838353032')
'H4722-88502'

My assumption is that since you are reading and writing with ONE, some
nodes have the data and others don't. Are you seeing any dropped mutations
in the logs? How long after the write are you attempting to read the same
data?
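
For reference, the usual consistency arithmetic behind that assumption, assuming
RF = 3 (which matches the three replicas reported by getendpoints elsewhere in
this thread):

  write at ONE (1) + read at ONE (1):  1 + 1 = 2, which is not greater than 3,
    so a read can land on a replica the write has not reached yet.
  write at LOCAL_QUORUM (2) + read at LOCAL_QUORUM (2):  2 + 2 = 4 > 3,
    so every read overlaps at least one replica holding the latest write.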






On Wed, Dec 2, 2020 at 9:12 AM Joe Obernberger 
wrote:

> Hi Carl - thank you for replying.
> I am using Cassandra 3.11.9-1
>
> Rows are not typically being deleted - I assume you're referring to
> Tombstones.  I don't think that should be the case here as I don't think
> we've deleted anything here.
> This is a test cluster and some of the machines are small (hence the one
> node with 128 tokens and 14.6% - it has a lot less disk space than the
> other nodes).  This is one of the features that I really like with
> Cassandra - being able to size nodes based on disk/CPU/RAM.
>
> All data is currently written with ONE.  All data is read with ONE.  I can
> replicate this issue at will, so can try different things easily.  I tried
> changing the read process to use QUORUM and the issue still takes place.
> Right now I'm running a 'nodetool repair' to see if that helps.  Our
> largest table 'doc' has the following stats:
>
> Table: doc
> SSTable count: 28
> Space used (live): 113609995010
> Space used (total): 113609995010
> Space used by snapshots (total): 0
> Off heap memory used (total): 225006197
> SSTable Compression Ratio: 0.37730474570644196
> Number of partitions (estimate): 93641747
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 3712
> Local read count: 891065091
> Local read latency: NaN ms
> Local write count: 7448281135
> Local write latency: NaN ms
> Pending flushes: 0
> Percent repaired: 0.0
> Bloom filter false positives: 988
> Bloom filter false ratio: 0.1
> Bloom filter space used: 151149880
> Bloom filter off heap memory used: 151149656
> Index summary off heap memory used: 38654701
> Compression metadata off heap memory used: 35201840
> Compacted partition minimum bytes: 104
> Compacted partition maximum bytes: 3379391
> Compacted partition mean bytes: 3389
> Average live cells per slice (last five minutes): NaN
> Maximum live cells per slice (last five minutes): 0
> Average tombstones per slice (last five minutes): NaN
> Maximum tombstones per slice (last five minutes): 0
> Dropped Mutations: 8174438
>
> Thoughts/ideas?  Thank you!
>
> -Joe
> On 12/2/2020 11:49 AM, Carl Mueller wrote:
>
> Why is one of your nodes only at 14.6% ownership? That's weird, unless you
> have a small rowcount.
>
> Are you frequently deleting rows? Are you frequently writing rows at ONE?
>
> What version of cassandra?
>
>
>
> On Wed, Dec 2, 2020 at 9:56 AM Joe Obernberger <
> joseph.obernber...@gmail.com> wrote:
>
>> Hi All - this is my first post here.  I've been using Cassandra for
>> several months now and am loving it.  We are moving from Apache HBase to
>> Cassandra for a big data analytics platform.
>>
>> I'm using java to get rows from Cassandra and very frequently get a
>> java.util.NoSuchElementException when iterating through a ResultSet.  If
>> I retry this query again (often several times), it works.  The debug log
>> on the Cassandra nodes show this message:
>> org.apache.cassandra.service.DigestMismatchException: Mismatch for key
>> DecoratedKey
>>
>> My cluster looks like this:
>>
>> Datacenter: datacenter1
>> ===
>> Status=Up/Down
>> |/ State=Normal/Leaving/Joining/Moving
>> --  Address Load   Tokens   Owns (effective)  Host
>> ID   Rack
>> UN  172.16.100.224  340.5 GiB  512  50.9%
>> 8ba646ac-2b33-49de-a220-ae9842f18806  rack1
>> UN  172.16.100.208  269.19 GiB  384  40.3%
>> 4e0ba42f-649b-425a-857a-34497eb3036e  rack1
>> UN  172.16.100.225  282.83 GiB  512  50.4%
>> 247f3d70-d13b-4d68-9a53-2ed58e01a63e  rack1
>> UN  172.16.110.3    409.78 GiB  768  63.2%
>> 0abea102-06d2-4309-af36-a3163e8f00d8  rack1
>> UN  172.16.110.4    330.15 GiB  512  50.6%
>> 2a5ae735-6304-4e99-924b-44d9d5ec86b7  rack1
>> UN  172.16.100.253  98.88 GiB  128  14.6%
>> 6b528b0b-d7f7-4378-bba8-1857802d4f18  rack1
>> UN  172.16.100.254  204.5 GiB  256  30.0%
>> 87d0cb48-a57d-460e-bd82-93e6e52e93ea  rack1
>>
>> I s

Re: Digest mismatch

2020-12-02 Thread Joe Obernberger

Hi Carl - thank you for replying.
I am using Cassandra 3.11.9-1

Rows are not typically being deleted - I assume you're referring to 
Tombstones.  I don't think that should be the case here as I don't think 
we've deleted anything here.
This is a test cluster and some of the machines are small (hence the one 
node with 128 tokens and 14.6% - it has a lot less disk space than the 
other nodes).  This is one of the features that I really like with 
Cassandra - being able to size nodes based on disk/CPU/RAM.


All data is currently written with ONE.  All data is read with ONE.  I 
can replicate this issue at will, so can try different things easily.  I 
tried changing the read process to use QUORUM and the issue still takes 
place.  Right now I'm running a 'nodetool repair' to see if that helps.  
Our largest table 'doc' has the following stats:


Table: doc
SSTable count: 28
Space used (live): 113609995010
Space used (total): 113609995010
Space used by snapshots (total): 0
Off heap memory used (total): 225006197
SSTable Compression Ratio: 0.37730474570644196
Number of partitions (estimate): 93641747
Memtable cell count: 0
Memtable data size: 0
Memtable off heap memory used: 0
Memtable switch count: 3712
Local read count: 891065091
Local read latency: NaN ms
Local write count: 7448281135
Local write latency: NaN ms
Pending flushes: 0
Percent repaired: 0.0
Bloom filter false positives: 988
Bloom filter false ratio: 0.1
Bloom filter space used: 151149880
Bloom filter off heap memory used: 151149656
Index summary off heap memory used: 38654701
Compression metadata off heap memory used: 35201840
Compacted partition minimum bytes: 104
Compacted partition maximum bytes: 3379391
Compacted partition mean bytes: 3389
Average live cells per slice (last five minutes): NaN
Maximum live cells per slice (last five minutes): 0
Average tombstones per slice (last five minutes): NaN
Maximum tombstones per slice (last five minutes): 0
Dropped Mutations: 8174438

Thoughts/ideas?  Thank you!

-Joe

On 12/2/2020 11:49 AM, Carl Mueller wrote:
Why is one of your nodes only at 14.6% ownership? That's weird, unless 
you have a small rowcount.


Are you frequently deleting rows? Are you frequently writing rows at ONE?

What version of cassandra?



On Wed, Dec 2, 2020 at 9:56 AM Joe Obernberger <joseph.obernber...@gmail.com> wrote:


Hi All - this is my first post here.  I've been using Cassandra for
several months now and am loving it.  We are moving from Apache
HBase to
Cassandra for a big data analytics platform.

I'm using java to get rows from Cassandra and very frequently get a
java.util.NoSuchElementException when iterating through a
ResultSet.  If
I retry this query again (often several times), it works.  The
debug log
on the Cassandra nodes show this message:
org.apache.cassandra.service.DigestMismatchException: Mismatch for
key
DecoratedKey

My cluster looks like this:

Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens   Owns (effective) Host
ID   Rack
UN  172.16.100.224  340.5 GiB  512  50.9%
8ba646ac-2b33-49de-a220-ae9842f18806  rack1
UN  172.16.100.208  269.19 GiB  384  40.3%
4e0ba42f-649b-425a-857a-34497eb3036e  rack1
UN  172.16.100.225  282.83 GiB  512  50.4%
247f3d70-d13b-4d68-9a53-2ed58e01a63e  rack1
UN  172.16.110.3    409.78 GiB  768  63.2%
0abea102-06d2-4309-af36-a3163e8f00d8  rack1
UN  172.16.110.4    330.15 GiB  512  50.6%
2a5ae735-6304-4e99-924b-44d9d5ec86b7  rack1
UN  172.16.100.253  98.88 GiB  128  14.6%
6b528b0b-d7f7-4378-bba8-1857802d4f18  rack1
UN  172.16.100.254  204.5 GiB  256  30.0%
87d0cb48-a57d-460e-bd82-93e6e52e93ea  rack1

I suspect this has to do with how I'm using consistency levels?
Typically I'm using ONE.  I just set the
dclocal_read_repair_chance to
0.0, but I'm still seeing the issue.  Any help/tips?

Thank you!

-Joe Obernberger


-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org

For additional commands, e-mail: user-h...@cassandra.apache.org



 


Re: Digest mismatch

2020-12-02 Thread Carl Mueller
Why is one of your nodes only at 14.6% ownership? That's weird, unless you
have a small rowcount.

Are you frequently deleting rows? Are you frequently writing rows at ONE?

What version of cassandra?



On Wed, Dec 2, 2020 at 9:56 AM Joe Obernberger 
wrote:

> Hi All - this is my first post here.  I've been using Cassandra for
> several months now and am loving it.  We are moving from Apache HBase to
> Cassandra for a big data analytics platform.
>
> I'm using java to get rows from Cassandra and very frequently get a
> java.util.NoSuchElementException when iterating through a ResultSet.  If
> I retry this query again (often several times), it works.  The debug log
> on the Cassandra nodes show this message:
> org.apache.cassandra.service.DigestMismatchException: Mismatch for key
> DecoratedKey
>
> My cluster looks like this:
>
> Datacenter: datacenter1
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address Load   Tokens   Owns (effective)  Host
> ID   Rack
> UN  172.16.100.224  340.5 GiB  512  50.9%
> 8ba646ac-2b33-49de-a220-ae9842f18806  rack1
> UN  172.16.100.208  269.19 GiB  384  40.3%
> 4e0ba42f-649b-425a-857a-34497eb3036e  rack1
> UN  172.16.100.225  282.83 GiB  512  50.4%
> 247f3d70-d13b-4d68-9a53-2ed58e01a63e  rack1
> UN  172.16.110.3    409.78 GiB  768  63.2%
> 0abea102-06d2-4309-af36-a3163e8f00d8  rack1
> UN  172.16.110.4    330.15 GiB  512  50.6%
> 2a5ae735-6304-4e99-924b-44d9d5ec86b7  rack1
> UN  172.16.100.253  98.88 GiB  128  14.6%
> 6b528b0b-d7f7-4378-bba8-1857802d4f18  rack1
> UN  172.16.100.254  204.5 GiB  256  30.0%
> 87d0cb48-a57d-460e-bd82-93e6e52e93ea  rack1
>
> I suspect this has to do with how I'm using consistency levels?
> Typically I'm using ONE.  I just set the dclocal_read_repair_chance to
> 0.0, but I'm still seeing the issue.  Any help/tips?
>
> Thank you!
>
> -Joe Obernberger
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


Digest mismatch

2020-12-02 Thread Joe Obernberger
Hi All - this is my first post here.  I've been using Cassandra for 
several months now and am loving it.  We are moving from Apache HBase to 
Cassandra for a big data analytics platform.


I'm using java to get rows from Cassandra and very frequently get a 
java.util.NoSuchElementException when iterating through a ResultSet.  If 
I retry this query again (often several times), it works.  The debug log 
on the Cassandra nodes show this message:
org.apache.cassandra.service.DigestMismatchException: Mismatch for key 
DecoratedKey


My cluster looks like this:

Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens   Owns (effective)  Host 
ID   Rack
UN  172.16.100.224  340.5 GiB  512  50.9% 
8ba646ac-2b33-49de-a220-ae9842f18806  rack1
UN  172.16.100.208  269.19 GiB  384  40.3% 
4e0ba42f-649b-425a-857a-34497eb3036e  rack1
UN  172.16.100.225  282.83 GiB  512  50.4% 
247f3d70-d13b-4d68-9a53-2ed58e01a63e  rack1
UN  172.16.110.3    409.78 GiB  768  63.2% 
0abea102-06d2-4309-af36-a3163e8f00d8  rack1
UN  172.16.110.4    330.15 GiB  512  50.6% 
2a5ae735-6304-4e99-924b-44d9d5ec86b7  rack1
UN  172.16.100.253  98.88 GiB  128  14.6% 
6b528b0b-d7f7-4378-bba8-1857802d4f18  rack1
UN  172.16.100.254  204.5 GiB  256  30.0% 
87d0cb48-a57d-460e-bd82-93e6e52e93ea  rack1


I suspect this has to do with how I'm using consistency levels? 
Typically I'm using ONE.  I just set the dclocal_read_repair_chance to 
0.0, but I'm still seeing the issue.  Any help/tips?


Thank you!

-Joe Obernberger


-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Digest mismatch

2017-02-14 Thread Gopal, Dhruva
I’m running into a situation where I’m seeing a lot of Digest errors in the 
debug log. I looked at this post: 
http://stackoverflow.com/questions/39765813/datastax-mismatch-for-key-issue and 
verified that read_repair_chance is set to 0. We are using
DateTieredCompactionStrategy for our time series tables. Can I get some 
pointers on how to track this down?

DEBUG [ReadRepairStage:25] 2017-02-14 17:45:03,256 ReadCallback.java:235 - 
Digest mismatch:
org.apache.cassandra.service.DigestMismatchException: Mismatch for key 
DecoratedKey(-1025043841859687448, 
003e2f66736d732f696e746572616374696f6e732f627275696e732d36393564363866382d633935392d343737622d623230342d6566373665336464656165350800)
 (d41d8cd98f00b204e9800998ecf8427e vs dabf6fac8cf82262b31514aa719434b7)
at 
org.apache.cassandra.service.DigestResolver.resolve(DigestResolver.java:85) 
~[apache-cassandra-3.9.0.jar:3.9.0]
at 
org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:226)
 ~[apache-cassandra-3.9.0.jar:3.9.0]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_111]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]


Regards,
DHRUVA GOPAL
sr. MANAGER, ENGINEERING
REPORTING, ANALYTICS AND BIG DATA
+1 408.325.2011 WORK
+1 408.219.1094 MOBILE
UNITED STATES
dhruva.go...@aspect.com<mailto:dhruva.go...@aspect.com>
aspect.com<http://www.aspect.com/>



Re: Failed to solve Digest mismatch

2013-10-09 Thread Jason Tang
I did some tests on this issue, and it turns out the problem is caused by the
local timestamp.
In our traffic, the update and the delete happened very fast, within 1 second,
even within 100 ms.
And at that time, the NTP service did not seem to be working well; the offset was
sometimes even larger than 1 second.

So some delete timestamps ended up earlier than the corresponding create
timestamps, and when the mismatch was resolved, the result was not correct.
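
For what it's worth, if the client can generate a strictly increasing sequence
itself, that sequence can be supplied as the write timestamp instead of relying
on each node's clock. A minimal sketch using CQL's USING TIMESTAMP through a
current DataStax Java driver session (keyspace, table, key and timestamp values
are placeholders; this thread itself used the Hector client, which has its own
way of supplying timestamps):

import com.datastax.oss.driver.api.core.CqlSession;

public class ClientTimestamps {
    public static void main(String[] args) {
        // Connects to localhost:9042 by default; point it at your own cluster.
        try (CqlSession session = CqlSession.builder().build()) {
            // Client-generated sequence, in microseconds; the delete must get a
            // strictly larger timestamp than the insert it is meant to shadow.
            long createTs = 1_000_000L;
            long deleteTs = createTs + 1;
            session.execute(
                "INSERT INTO my_ks.my_table (id, status) VALUES ('some-key', 'created')"
                    + " USING TIMESTAMP " + createTs);
            session.execute(
                "DELETE FROM my_ks.my_table USING TIMESTAMP " + deleteTs
                    + " WHERE id = 'some-key'");
        }
    }
}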


2012/7/4 aaron morton aa...@thelastpickle.com

 Jason,
 Are you able document the steps to reproduce this on a clean install ?

 Is so do you have time to create an issue on
 https://issues.apache.org/jira/browse/CASSANDRA

 Thanks


 -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 2/07/2012, at 1:49 AM, Jason Tang wrote:

 For the create/update/deleteColumn/deleteRow test case, with QUORUM
 consistency level, 6 nodes, replication factor 3, and one thread, I can
 reproduce this in roughly 1 out of 100 rounds.

 And if I have 20 client threads to run the test client, the ratio is
 bigger.

 And the test group will be executed by one thread, and the client time
 stamp is unique and sequenced, guaranteed by Hector.

 And client only access the data from local Cassandra.

 And the query only use the row key which is unique. The column name is not
 unique, in my case, eg, status.

 And the row have around 7 columns, which are all not big, eg
 status:true, userName:Jason ...

 BRs
 //Ares

 2012/7/1 Jonathan Ellis jbel...@gmail.com

 Is this Cassandra 1.1.1?

 How often do you observe this?  How many columns are in the row?  Can
 you reproduce when querying by column name, or only when slicing the
 row?

 On Thu, Jun 28, 2012 at 7:24 AM, Jason Tang ares.t...@gmail.com wrote:
  Hi
 
 First I delete one column, then I delete one row. Then try to read
 all
  columns from the same row, all operations from same client app.
 
 The consistency level is read/write quorum.
 
 Check the Cassandra log, the local node don't perform the delete
  operation but send the mutation to other nodes (192.168.0.6,
 192.168.0.1)
 
 After delete, I try to read all columns from the row, I found the
 node
  found Digest mismatch due to Quorum consistency configuration, but the
  result is not correct.
 
 From the log, I can see the delete mutation already accepted
  by 192.168.0.6, 192.168.0.1,  but when 192.168.0.5 read response from
 0.6
  and 0.1, and then it merge the data, but finally 0.5 shows the result
 which
  is the dirty data.
 
 Following logs shows the change of column 737461747573 ,
 192.168.0.5
  try to read from 0.1 and 0.6, it should be deleted, but finally it
 shows it
  has the data.
 
  log:
  192.168.0.5
  DEBUG [Thrift:17] 2012-06-28 15:59:42,198 StorageProxy.java (line 653)
  Command/ConsistencyLevel is SliceByNamesReadCommand(table='drc',
  key=7878323239537570657254616e67307878,
  columnParent='QueryPath(columnFamilyName='queue',
 superColumnName='null',
  columnName='null')',
 
 columns=[6578656375746554696d65,6669726554696d65,67726f75705f6964,696e517565756554696d65,6c6f67526f6f744964,6d6f54797065,706172746974696f6e,7265636569766554696d65,72657175657374,7265747279,7365727669636550726f7669646572,737461747573,757365724e616d65,])/QUORUM
  DEBUG [Thrift:17] 2012-06-28 15:59:42,198 ReadCallback.java (line 79)
  Blockfor is 2; setting up requests to /192.168.0.6,/192.168.0.1
  DEBUG [Thrift:17] 2012-06-28 15:59:42,198 StorageProxy.java (line 674)
  reading data from /192.168.0.6
  DEBUG [Thrift:17] 2012-06-28 15:59:42,198 StorageProxy.java (line 694)
  reading digest from /192.168.0.1
  DEBUG [RequestResponseStage:2] 2012-06-28 15:59:42,199
  ResponseVerbHandler.java (line 44) Processing response on a callback
 from
  6556@/192.168.0.6
  DEBUG [RequestResponseStage:2] 2012-06-28 15:59:42,199
  AbstractRowResolver.java (line 66) Preprocessed data response
  DEBUG [RequestResponseStage:6] 2012-06-28 15:59:42,199
  ResponseVerbHandler.java (line 44) Processing response on a callback
 from
  6557@/192.168.0.1
  DEBUG [RequestResponseStage:6] 2012-06-28 15:59:42,199
  AbstractRowResolver.java (line 66) Preprocessed digest response
  DEBUG [Thrift:17] 2012-06-28 15:59:42,199 RowDigestResolver.java (line
 65)
  resolving 2 responses
  DEBUG [Thrift:17] 2012-06-28 15:59:42,200 StorageProxy.java (line 733)
  Digest mismatch: org.apache.cassandra.service.DigestMismatchException:
  Mismatch for key DecoratedKey(100572974179274741747356988451225858264,
  7878323239537570657254616e67307878) (b725ab25696111be49aaa7c4b7afa52d vs
  d41d8cd98f00b204e9800998ecf8427e)
  DEBUG [RequestResponseStage:9] 2012-06-28 15:59:42,201
  ResponseVerbHandler.java (line 44) Processing response on a callback
 from
  6558@/192.168.0.6
  DEBUG [RequestResponseStage:7] 2012-06-28 15:59:42,201
  ResponseVerbHandler.java (line 44) Processing response on a callback
 from
  6559@/192.168.0.1
  DEBUG [RequestResponseStage:9] 2012-06-28 15:59:42,201
  AbstractRowResolver.java (line 66

Re: Consistent problem when solve Digest mismatch

2013-03-06 Thread Jason Tang
Actually I didn't concurrently update the same records, because I first
create it, then search it, then delete it. The version conflict resolution
failed because the delete's local timestamp is earlier than the create's
local timestamp.


2013/3/6 aaron morton aa...@thelastpickle.com

 Otherwise, it means the version conflict solving strong depends on global
 sequence id (timestamp) which need provide by client ?

 Yes.
 If you have an  area of your data model that has a high degree of
 concurrency C* may not be the right match.

 In 1.1 we have atomic updates so clients see either the entire write or
 none of it. And sometimes you can design a data model that does not mutate
 shared values, but writes ledger entries instead. See Matt Denis talk here
 http://www.datastax.com/events/cassandrasummit2012/presentations or this
 post http://thelastpickle.com/2012/08/18/Sorting-Lists-For-Humans/

 Cheers

 -
 Aaron Morton
 Freelance Cassandra Developer
 New Zealand

 @aaronmorton
 http://www.thelastpickle.com

 On 4/03/2013, at 4:30 PM, Jason Tang ares.t...@gmail.com wrote:

 Hi

 The timestamp provided by my client is unix timestamp (with ntp), and as I
 said, due to the ntp drift, the local unix timestamp is not accurately
 synchronized (compare to my case).

 So for short, client can not provide global sequence number to indicate
 the event order.

 But I wonder, I configured Cassandra consistency level as write QUORUM. So
 for one record, I suppose Cassandra has the ability to decide the final
 update results.

 Otherwise, it means the version conflict solving strong depends on global
 sequence id (timestamp) which need provide by client ?


 //Tang


 2013/3/4 Sylvain Lebresne sylv...@datastax.com

 The problem is, what exactly is the sequence number you are talking
 about?

 Or let me put it another way: if you do have a sequence number that
 provides a total ordering of your operation, then that is exactly what you
 should use as your timestamp. What Cassandra calls the timestamp, is
 exactly what you call seqID, it's the number Cassandra uses to decide the
 order of operation.

 Except that in real life, provided you have more than one client talking
 to Cassandra, then providing a total ordering of operation is hard, and in
 fact not doable efficiently. So in practice, people use unix timestamp
 (with ntp) which provide a very good while cheap approximation of the real
 life order of operations.

 But again, if you do know how to assign a more precise timestamp,
 Cassandra let you use that: you can provid your own timestamp (using unix
 timestamp is just the default). The point being, unix timestamp is the
 better approximation we have in practice.

 --
 Sylvain


 On Mon, Mar 4, 2013 at 9:26 AM, Jason Tang ares.t...@gmail.com wrote:

 Hi

   Previous I met a consistency problem, you can refer the link below for
 the whole story.

 http://mail-archives.apache.org/mod_mbox/cassandra-user/201206.mbox/%3CCAFb+LUxna0jiY0V=AvXKzUdxSjApYm4zWk=ka9ljm-txc04...@mail.gmail.com%3E

   And after check the code, seems I found some clue of the problem.
 Maybe some one can check this.

   For short, I have Cassandra cluster (1.0.3), The consistency level is
 read/write quorum, replication_factor is 3.

   Here is event sequence:

 seqID   NodeA   NodeB   NodeC
 1. New  New   New
 2. Update  Update   Update
 3. Delete   Delete

 When try to read from NodeB and NodeC, Digest mismatch exception
 triggered, so Cassandra try to resolve this version conflict.
 But the result is value Update.

 Here is the suspect root cause, the version conflict resolved based
 on time stamp.

 Node C local time is a bit earlier then node A.

 Update requests sent from node C with time stamp 00:00:00.050,
 Delete sent from node A with time stamp 00:00:00.020, which is not same
 as the event sequence.

 So the version conflict resolved incorrectly.

 It is true?

 If Yes, then it means, consistency level can secure the conflict been
 found, but to solve it correctly, dependence one time synchronization's
 accuracy, e.g. NTP ?








Re: Consistent problem when solve Digest mismatch

2013-03-05 Thread aaron morton
 Otherwise, it means the version conflict solving strong depends on global 
 sequence id (timestamp) which need provide by client ?
Yes. 
If you have an area of your data model that has a high degree of concurrency, 
C* may not be the right match.

In 1.1 we have atomic updates so clients see either the entire write or none of 
it. And sometimes you can design a data model that does not mutate shared values, 
but writes ledger entries instead. See Matt Denis talk here 
http://www.datastax.com/events/cassandrasummit2012/presentations or this post 
http://thelastpickle.com/2012/08/18/Sorting-Lists-For-Humans/
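
A rough sketch of the ledger-style model, using today's CQL and the DataStax
Java driver purely for illustration (the keyspace, table and column names are
invented): each state change is appended as its own row under the entity's key,
and the current state is read off the newest entry instead of overwriting a
shared cell.

    // Sketch only: keyspace, table and column names are invented.
    // Assumes the DataStax Java driver 4.x (com.datastax.oss:java-driver-core).
    import com.datastax.oss.driver.api.core.CqlSession;

    public class LedgerExample {
        public static void main(String[] args) {
            try (CqlSession session = CqlSession.builder().build()) {
                // One row per state change instead of one mutable status cell.
                session.execute(
                    "CREATE TABLE IF NOT EXISTS drc.task_events ("
                    + " task_id text, event_time timeuuid, status text,"
                    + " PRIMARY KEY (task_id, event_time)"
                    + ") WITH CLUSTERING ORDER BY (event_time DESC)");
                session.execute(
                    "INSERT INTO drc.task_events (task_id, event_time, status)"
                    + " VALUES ('xx229SuperTang0xx', now(), 'deleted')");
                // The newest entry per task is the current state; nothing is overwritten.
                session.execute(
                    "SELECT status FROM drc.task_events"
                    + " WHERE task_id = 'xx229SuperTang0xx' LIMIT 1");
            }
        }
    }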

Cheers

-
Aaron Morton
Freelance Cassandra Developer
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 4/03/2013, at 4:30 PM, Jason Tang ares.t...@gmail.com wrote:

 Hi 
 
 The timestamp provided by my client is unix timestamp (with ntp), and as I 
 said, due to the ntp drift, the local unix timestamp is not accurately 
 synchronized (compare to my case).
 
 So for short, client can not provide global sequence number to indicate the 
 event order.
 
 But I wonder, I configured Cassandra consistency level as write QUORUM. So 
 for one record, I suppose Cassandra has the ability to decide the final 
 update results.
 
 Otherwise, it means the version conflict solving strong depends on global 
 sequence id (timestamp) which need provide by client ?
 
 
 //Tang
 
 
 2013/3/4 Sylvain Lebresne sylv...@datastax.com
 The problem is, what is the sequence number you are talking about is exactly?
 
 Or let me put it another way: if you do have a sequence number that provides 
 a total ordering of your operation, then that is exactly what you should use 
 as your timestamp. What Cassandra calls the timestamp, is exactly what you 
 call seqID, it's the number Cassandra uses to decide the order of operation.
 
 Except that in real life, provided you have more than one client talking to 
 Cassandra, then providing a total ordering of operation is hard, and in fact 
 not doable efficiently. So in practice, people use unix timestamp (with ntp) 
 which provide a very good while cheap approximation of the real life order of 
 operations.
 
 But again, if you do know how to assign a more precise timestamp, Cassandra 
 let you use that: you can provid your own timestamp (using unix timestamp is 
 just the default). The point being, unix timestamp is the better 
 approximation we have in practice.
 
 --
 Sylvain
 
 
 On Mon, Mar 4, 2013 at 9:26 AM, Jason Tang ares.t...@gmail.com wrote:
 Hi
 
   Previous I met a consistency problem, you can refer the link below for the 
 whole story.
 http://mail-archives.apache.org/mod_mbox/cassandra-user/201206.mbox/%3CCAFb+LUxna0jiY0V=AvXKzUdxSjApYm4zWk=ka9ljm-txc04...@mail.gmail.com%3E
 
   And after check the code, seems I found some clue of the problem. Maybe 
 some one can check this.
 
   For short, I have Cassandra cluster (1.0.3), The consistency level is 
 read/write quorum, replication_factor is 3. 
 
   Here is event sequence:
 
 seqID   NodeA   NodeB   NodeC
 1. New  New   New
 2. Update  Update   Update
 3. Delete   Delete
 
 When try to read from NodeB and NodeC, Digest mismatch exception triggered, 
 so Cassandra try to resolve this version conflict.
 But the result is value Update.
 
 Here is the suspect root cause, the version conflict resolved based on time 
 stamp.
 
 Node C local time is a bit earlier then node A.
 
 Update requests sent from node C with time stamp 00:00:00.050, Delete 
 sent from node A with time stamp 00:00:00.020, which is not same as the event 
 sequence.
 
 So the version conflict resolved incorrectly.
 
 It is true?
 
 If Yes, then it means, consistency level can secure the conflict been found, 
 but to solve it correctly, dependence one time synchronization's accuracy, 
 e.g. NTP ?
 
 
 
 



Re: Consistent problem when solve Digest mismatch

2013-03-04 Thread Sylvain Lebresne
The problem is: what exactly is the sequence number you are talking about?

Or let me put it another way: if you do have a sequence number that
provides a total ordering of your operations, then that is exactly what you
should use as your timestamp. What Cassandra calls the timestamp is
exactly what you call a seqID: it's the number Cassandra uses to decide the
order of operations.

Except that in real life, provided you have more than one client talking to
Cassandra, providing a total ordering of operations is hard, and in
fact not doable efficiently. So in practice, people use a unix timestamp
(with ntp), which provides a very good yet cheap approximation of the
real-life order of operations.

But again, if you do know how to assign a more precise timestamp,
Cassandra lets you use that: you can provide your own timestamp (using the unix
timestamp is just the default). The point being, the unix timestamp is the
best approximation we have in practice.
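
For example, a minimal sketch of supplying that timestamp explicitly, assuming
today's CQL and the DataStax Java driver 4.x with its setQueryTimestamp method
(the thread itself used Hector/Thrift, and the keyspace, table and key below
are placeholders):

    // Sketch: the client supplies its own monotonically increasing timestamp
    // (in microseconds) instead of relying on loosely synchronised node clocks.
    import com.datastax.oss.driver.api.core.CqlSession;
    import com.datastax.oss.driver.api.core.cql.SimpleStatement;

    public class ClientTimestampExample {
        public static void main(String[] args) {
            try (CqlSession session = CqlSession.builder().build()) {
                long seq = 1340870382109016L;   // client-managed sequence, in microseconds
                session.execute(SimpleStatement.newInstance(
                        "DELETE FROM drc.queue WHERE key = 'xx229SuperTang0xx'")
                    .setQueryTimestamp(seq));
                // Equivalently, the timestamp can be written inline in CQL:
                // DELETE FROM drc.queue USING TIMESTAMP 1340870382109016 WHERE key = '...';
            }
        }
    }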

--
Sylvain


On Mon, Mar 4, 2013 at 9:26 AM, Jason Tang ares.t...@gmail.com wrote:

 Hi

   Previous I met a consistency problem, you can refer the link below for
 the whole story.

 http://mail-archives.apache.org/mod_mbox/cassandra-user/201206.mbox/%3CCAFb+LUxna0jiY0V=AvXKzUdxSjApYm4zWk=ka9ljm-txc04...@mail.gmail.com%3E

   And after check the code, seems I found some clue of the problem. Maybe
 some one can check this.

   For short, I have Cassandra cluster (1.0.3), The consistency level is
 read/write quorum, replication_factor is 3.

   Here is event sequence:

 seqID   NodeA   NodeB   NodeC
 1. New  New   New
 2. Update  Update   Update
 3. Delete   Delete

 When try to read from NodeB and NodeC, Digest mismatch exception
 triggered, so Cassandra try to resolve this version conflict.
 But the result is value Update.

 Here is the suspect root cause, the version conflict resolved based
 on time stamp.

 Node C local time is a bit earlier then node A.

 Update requests sent from node C with time stamp 00:00:00.050, Delete
 sent from node A with time stamp 00:00:00.020, which is not same as the
 event sequence.

 So the version conflict resolved incorrectly.

 It is true?

 If Yes, then it means, consistency level can secure the conflict been
 found, but to solve it correctly, dependence one time synchronization's
 accuracy, e.g. NTP ?





Re: Consistent problem when solve Digest mismatch

2013-03-04 Thread Jason Tang
Hi

The timestamp provided by my client is a unix timestamp (with ntp), and as I
said, due to ntp drift, the local unix timestamps are not accurately
synchronized across nodes (at least in my case).

So, in short, the client cannot provide a global sequence number to indicate the
event order.

But I wonder: I configured the Cassandra consistency level as write QUORUM, so
for one record I assumed Cassandra has the ability to decide the final
update result.

Otherwise, it means that resolving version conflicts depends entirely on a global
sequence id (timestamp) which must be provided by the client?


//Tang


2013/3/4 Sylvain Lebresne sylv...@datastax.com

 The problem is, what is the sequence number you are talking about is
 exactly?

 Or let me put it another way: if you do have a sequence number that
 provides a total ordering of your operation, then that is exactly what you
 should use as your timestamp. What Cassandra calls the timestamp, is
 exactly what you call seqID, it's the number Cassandra uses to decide the
 order of operation.

 Except that in real life, provided you have more than one client talking
 to Cassandra, then providing a total ordering of operation is hard, and in
 fact not doable efficiently. So in practice, people use unix timestamp
 (with ntp) which provide a very good while cheap approximation of the real
 life order of operations.

 But again, if you do know how to assign a more precise timestamp,
 Cassandra let you use that: you can provid your own timestamp (using unix
 timestamp is just the default). The point being, unix timestamp is the
 better approximation we have in practice.

 --
 Sylvain


 On Mon, Mar 4, 2013 at 9:26 AM, Jason Tang ares.t...@gmail.com wrote:

 Hi

   Previous I met a consistency problem, you can refer the link below for
 the whole story.

 http://mail-archives.apache.org/mod_mbox/cassandra-user/201206.mbox/%3CCAFb+LUxna0jiY0V=AvXKzUdxSjApYm4zWk=ka9ljm-txc04...@mail.gmail.com%3E

   And after check the code, seems I found some clue of the problem. Maybe
 some one can check this.

   For short, I have Cassandra cluster (1.0.3), The consistency level is
 read/write quorum, replication_factor is 3.

   Here is event sequence:

 seqID   NodeA   NodeB   NodeC
 1. New  New   New
 2. Update  Update   Update
 3. Delete   Delete

 When try to read from NodeB and NodeC, Digest mismatch exception
 triggered, so Cassandra try to resolve this version conflict.
 But the result is value Update.

 Here is the suspect root cause, the version conflict resolved based
 on time stamp.

 Node C local time is a bit earlier then node A.

 Update requests sent from node C with time stamp 00:00:00.050, Delete
 sent from node A with time stamp 00:00:00.020, which is not same as the
 event sequence.

 So the version conflict resolved incorrectly.

 It is true?

 If Yes, then it means, consistency level can secure the conflict been
 found, but to solve it correctly, dependence one time synchronization's
 accuracy, e.g. NTP ?






Re: Read during digest mismatch

2012-11-13 Thread Manu Zhang
If consistency is TWO, don't we just send a data request to one replica and a
digest request to another?


On Mon, Nov 12, 2012 at 2:49 AM, Jonathan Ellis jbel...@gmail.com wrote:

 Correct.  Which is one reason there is a separate setting for
 cross-datacenter read repair, by the way.

 On Thu, Nov 8, 2012 at 4:43 PM, sankalp kohli kohlisank...@gmail.com
 wrote:
  Hi,
  Lets say I am reading with consistency TWO and my replication is 3.
 The
  read is eligible for global read repair. It will send a request to get
 data
  from one node and a digest request to two.
  If there is a digest mismatch, what I am reading from the code looks
 like it
  will get the data from all three nodes and do a resolve of the data
 before
  returning to the client.
 
  Is it correct or I am readind the code wrong?
 
  Also if this is correct, look like if the third node is in other DC, the
  read will slow down even when the consistency was TWO?
 
  Thanks,
  Sankalp
 
 



 --
 Jonathan Ellis
 Project Chair, Apache Cassandra
 co-founder of DataStax, the source for professional Cassandra support
 http://www.datastax.com



Re: Read during digest mismatch

2012-11-13 Thread Edward Capriolo
I think the code base does not benefit from having too many different read
code paths. Logically what you're suggesting is reasonable, but you have to
consider the case of one replica being slow to respond.

Then what?

On Tuesday, November 13, 2012, Manu Zhang owenzhang1...@gmail.com wrote:
 If consistency is two, don't we just send data request to one and digest
request to another?

 On Mon, Nov 12, 2012 at 2:49 AM, Jonathan Ellis jbel...@gmail.com wrote:

 Correct.  Which is one reason there is a separate setting for
 cross-datacenter read repair, by the way.

 On Thu, Nov 8, 2012 at 4:43 PM, sankalp kohli kohlisank...@gmail.com
wrote:
  Hi,
  Lets say I am reading with consistency TWO and my replication is
3. The
  read is eligible for global read repair. It will send a request to get
data
  from one node and a digest request to two.
  If there is a digest mismatch, what I am reading from the code looks
like it
  will get the data from all three nodes and do a resolve of the data
before
  returning to the client.
 
  Is it correct or I am readind the code wrong?
 
  Also if this is correct, look like if the third node is in other DC,
the
  read will slow down even when the consistency was TWO?
 
  Thanks,
  Sankalp
 
 



 --
 Jonathan Ellis
 Project Chair, Apache Cassandra
 co-founder of DataStax, the source for professional Cassandra support
 http://www.datastax.com




Re: Read during digest mismatch

2012-11-13 Thread Wz1975
From my understanding, if CL = 2, one read and one digest are sent. Only if it
is read repair is a digest sent to all replicas.


Thanks.
-Wei

Sent from my Samsung smartphone on ATT

 Original message 
Subject: Re: Read during digest mismatch 
From: Edward Capriolo edlinuxg...@gmail.com 
To: user@cassandra.apache.org user@cassandra.apache.org 
CC:  

I think the code base does not benefit from having too many different read code 
paths. Logically what your suggesting is reasonable, but you have to consider 
the case of one being slow to respond. 

Then what?

On Tuesday, November 13, 2012, Manu Zhang owenzhang1...@gmail.com wrote:
 If consistency is two, don't we just send data request to one and digest 
 request to another?

 On Mon, Nov 12, 2012 at 2:49 AM, Jonathan Ellis jbel...@gmail.com wrote:

 Correct.  Which is one reason there is a separate setting for
 cross-datacenter read repair, by the way.

 On Thu, Nov 8, 2012 at 4:43 PM, sankalp kohli kohlisank...@gmail.com wrote:
  Hi,
      Lets say I am reading with consistency TWO and my replication is 3. The
  read is eligible for global read repair. It will send a request to get data
  from one node and a digest request to two.
  If there is a digest mismatch, what I am reading from the code looks like 
  it
  will get the data from all three nodes and do a resolve of the data before
  returning to the client.
 
  Is it correct or I am readind the code wrong?
 
  Also if this is correct, look like if the third node is in other DC, the
  read will slow down even when the consistency was TWO?
 
  Thanks,
  Sankalp
 
 



 --
 Jonathan Ellis
 Project Chair, Apache Cassandra
 co-founder of DataStax, the source for professional Cassandra support
 http://www.datastax.com

 

Re: Read during digest mismatch

2012-11-13 Thread sankalp kohli
That's correct.


On Tue, Nov 13, 2012 at 7:06 PM, Wz1975 wz1...@yahoo.com wrote:

 From my understanding,  if CL = 2, one read,  one digest are sent.  Only
 if it is read repair,  digest is sent to all replicates.


 Thanks.
 -Wei

 Sent from my Samsung smartphone on ATT


  Original message 
 Subject: Re: Read during digest mismatch
 From: Edward Capriolo edlinuxg...@gmail.com
 To: user@cassandra.apache.org user@cassandra.apache.org
 CC:


 I think the code base does not benefit from having too many different read
 code paths. Logically what your suggesting is reasonable, but you have to
 consider the case of one being slow to respond.

 Then what?

 On Tuesday, November 13, 2012, Manu Zhang owenzhang1...@gmail.com wrote:
  If consistency is two, don't we just send data request to one and digest
 request to another?
 
  On Mon, Nov 12, 2012 at 2:49 AM, Jonathan Ellis jbel...@gmail.com
 wrote:
 
  Correct.  Which is one reason there is a separate setting for
  cross-datacenter read repair, by the way.
 
  On Thu, Nov 8, 2012 at 4:43 PM, sankalp kohli kohlisank...@gmail.com
 wrote:
   Hi,
   Lets say I am reading with consistency TWO and my replication is
 3. The
   read is eligible for global read repair. It will send a request to
 get data
   from one node and a digest request to two.
   If there is a digest mismatch, what I am reading from the code looks
 like it
   will get the data from all three nodes and do a resolve of the data
 before
   returning to the client.
  
   Is it correct or I am readind the code wrong?
  
   Also if this is correct, look like if the third node is in other DC,
 the
   read will slow down even when the consistency was TWO?
  
   Thanks,
   Sankalp
  
  
 
 
 
  --
  Jonathan Ellis
  Project Chair, Apache Cassandra
  co-founder of DataStax, the source for professional Cassandra support
  http://www.datastax.com
 
 



Re: Read during digest mismatch

2012-11-11 Thread Jonathan Ellis
Correct.  Which is one reason there is a separate setting for
cross-datacenter read repair, by the way.
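
For reference, the per-table knobs look roughly like this (a hedged sketch: the
keyspace and table name are invented, and read_repair_chance /
dclocal_read_repair_chance exist only through Cassandra 3.x, having been removed
in 4.0). read_repair_chance triggers read repair against all replicas, including
remote datacenters, while dclocal_read_repair_chance keeps it within the local
datacenter.

    // Sketch only (DataStax Java driver 4.x; "ks.queue" is a placeholder).
    import com.datastax.oss.driver.api.core.CqlSession;

    public class ReadRepairSettings {
        public static void main(String[] args) {
            try (CqlSession session = CqlSession.builder().build()) {
                // Disable global (cross-DC) read repair, keep a 10% chance of
                // read repair restricted to the local datacenter.
                session.execute("ALTER TABLE ks.queue"
                        + " WITH read_repair_chance = 0.0"
                        + " AND dclocal_read_repair_chance = 0.1");
            }
        }
    }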

On Thu, Nov 8, 2012 at 4:43 PM, sankalp kohli kohlisank...@gmail.com wrote:
 Hi,
 Lets say I am reading with consistency TWO and my replication is 3. The
 read is eligible for global read repair. It will send a request to get data
 from one node and a digest request to two.
 If there is a digest mismatch, what I am reading from the code looks like it
 will get the data from all three nodes and do a resolve of the data before
 returning to the client.

 Is it correct or I am readind the code wrong?

 Also if this is correct, look like if the third node is in other DC, the
 read will slow down even when the consistency was TWO?

 Thanks,
 Sankalp





-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com


Read during digest mismatch

2012-11-08 Thread sankalp kohli
Hi,
Let's say I am reading with consistency TWO and my replication factor is 3. The
read is eligible for global read repair. It will send a request to get data
from one node and a digest request to two others.
If there is a digest mismatch, from what I am reading in the code it looks like
it will get the data from all three nodes and do a resolve of the data
before returning to the client.

Is that correct, or am I reading the code wrong?

Also, if this is correct, it looks like if the third node is in the other DC, the
read will slow down even when the consistency level was TWO?
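
A rough sketch of what the digest check amounts to (plain Java, not the actual
Cassandra code): a digest replica returns an MD5 over its serialized result, the
coordinator compares it against the digest of the full data response, and only
on a mismatch does it fall back to fetching full data from every contacted
replica and reconciling cell by cell on timestamps.

    // Illustrative only: the real logic lives in Cassandra's read resolvers;
    // this just shows the shape of the comparison.
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Arrays;

    public class DigestCheck {
        static byte[] md5(String serializedRow) throws Exception {
            return MessageDigest.getInstance("MD5")
                    .digest(serializedRow.getBytes(StandardCharsets.UTF_8));
        }

        public static void main(String[] args) throws Exception {
            String dataResponse   = "status=true,userName=Jason";   // full row from the data replica
            String digestResponse = "status=true,userName=Jason";   // what the digest replica hashed
            if (!Arrays.equals(md5(dataResponse), md5(digestResponse))) {
                // Only on a mismatch does the coordinator request full data from
                // all contacted replicas and reconcile by timestamp.
                System.out.println("digest mismatch -> fetch full data and repair");
            }
        }
    }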

Thanks,
Sankalp


Re: Failed to solve Digest mismatch

2012-07-04 Thread aaron morton
Jason, 
Are you able to document the steps to reproduce this on a clean install?

If so, do you have time to create an issue on
https://issues.apache.org/jira/browse/CASSANDRA

Thanks


-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 2/07/2012, at 1:49 AM, Jason Tang wrote:

 For the create/update/deleteColumn/deleteRow test case, for Quorum 
 consistency level, 6 nodes, replicate factor 3, for one thread around 1/100 
 round, I can have this reproduced.
 
 And if I have 20 client threads to run the test client, the ratio is bigger.
 
 And the test group will be executed by one thread, and the client time stamp 
 is unique and sequenced, guaranteed by Hector.
 
 And client only access the data from local Cassandra.
 
 And the query only use the row key which is unique. The column name is not 
 unique, in my case, eg, status.
 
 And the row have around 7 columns, which are all not big, eg status:true, 
 userName:Jason ...
 
 BRs
 //Ares
 
 2012/7/1 Jonathan Ellis jbel...@gmail.com
 Is this Cassandra 1.1.1?
 
 How often do you observe this?  How many columns are in the row?  Can
 you reproduce when querying by column name, or only when slicing the
 row?
 
 On Thu, Jun 28, 2012 at 7:24 AM, Jason Tang ares.t...@gmail.com wrote:
  Hi
 
 First I delete one column, then I delete one row. Then try to read all
  columns from the same row, all operations from same client app.
 
 The consistency level is read/write quorum.
 
 Check the Cassandra log, the local node don't perform the delete
  operation but send the mutation to other nodes (192.168.0.6, 192.168.0.1)
 
 After delete, I try to read all columns from the row, I found the node
  found Digest mismatch due to Quorum consistency configuration, but the
  result is not correct.
 
 From the log, I can see the delete mutation already accepted
  by 192.168.0.6, 192.168.0.1,  but when 192.168.0.5 read response from 0.6
  and 0.1, and then it merge the data, but finally 0.5 shows the result which
  is the dirty data.
 
 Following logs shows the change of column 737461747573 , 192.168.0.5
  try to read from 0.1 and 0.6, it should be deleted, but finally it shows it
  has the data.
 
  log:
  192.168.0.5
  DEBUG [Thrift:17] 2012-06-28 15:59:42,198 StorageProxy.java (line 653)
  Command/ConsistencyLevel is SliceByNamesReadCommand(table='drc',
  key=7878323239537570657254616e67307878,
  columnParent='QueryPath(columnFamilyName='queue', superColumnName='null',
  columnName='null')',
  columns=[6578656375746554696d65,6669726554696d65,67726f75705f6964,696e517565756554696d65,6c6f67526f6f744964,6d6f54797065,706172746974696f6e,7265636569766554696d65,72657175657374,7265747279,7365727669636550726f7669646572,737461747573,757365724e616d65,])/QUORUM
  DEBUG [Thrift:17] 2012-06-28 15:59:42,198 ReadCallback.java (line 79)
  Blockfor is 2; setting up requests to /192.168.0.6,/192.168.0.1
  DEBUG [Thrift:17] 2012-06-28 15:59:42,198 StorageProxy.java (line 674)
  reading data from /192.168.0.6
  DEBUG [Thrift:17] 2012-06-28 15:59:42,198 StorageProxy.java (line 694)
  reading digest from /192.168.0.1
  DEBUG [RequestResponseStage:2] 2012-06-28 15:59:42,199
  ResponseVerbHandler.java (line 44) Processing response on a callback from
  6556@/192.168.0.6
  DEBUG [RequestResponseStage:2] 2012-06-28 15:59:42,199
  AbstractRowResolver.java (line 66) Preprocessed data response
  DEBUG [RequestResponseStage:6] 2012-06-28 15:59:42,199
  ResponseVerbHandler.java (line 44) Processing response on a callback from
  6557@/192.168.0.1
  DEBUG [RequestResponseStage:6] 2012-06-28 15:59:42,199
  AbstractRowResolver.java (line 66) Preprocessed digest response
  DEBUG [Thrift:17] 2012-06-28 15:59:42,199 RowDigestResolver.java (line 65)
  resolving 2 responses
  DEBUG [Thrift:17] 2012-06-28 15:59:42,200 StorageProxy.java (line 733)
  Digest mismatch: org.apache.cassandra.service.DigestMismatchException:
  Mismatch for key DecoratedKey(100572974179274741747356988451225858264,
  7878323239537570657254616e67307878) (b725ab25696111be49aaa7c4b7afa52d vs
  d41d8cd98f00b204e9800998ecf8427e)
  DEBUG [RequestResponseStage:9] 2012-06-28 15:59:42,201
  ResponseVerbHandler.java (line 44) Processing response on a callback from
  6558@/192.168.0.6
  DEBUG [RequestResponseStage:7] 2012-06-28 15:59:42,201
  ResponseVerbHandler.java (line 44) Processing response on a callback from
  6559@/192.168.0.1
  DEBUG [RequestResponseStage:9] 2012-06-28 15:59:42,201
  AbstractRowResolver.java (line 66) Preprocessed data response
  DEBUG [RequestResponseStage:7] 2012-06-28 15:59:42,201
  AbstractRowResolver.java (line 66) Preprocessed data response
  DEBUG [Thrift:17] 2012-06-28 15:59:42,201 RowRepairResolver.java (line 63)
  resolving 2 responses
  DEBUG [Thrift:17] 2012-06-28 15:59:42,201 SliceQueryFilter.java (line 123)
  collecting 0 of 2147483647: 6669726554696d65:false:13@1340870382109004
  DEBUG [Thrift:17] 2012-06-28 15:59:42,201

Re: Failed to solve Digest mismatch

2012-07-01 Thread Jonathan Ellis
Is this Cassandra 1.1.1?

How often do you observe this?  How many columns are in the row?  Can
you reproduce when querying by column name, or only when slicing the
row?

On Thu, Jun 28, 2012 at 7:24 AM, Jason Tang ares.t...@gmail.com wrote:
 Hi

    First I delete one column, then I delete one row. Then try to read all
 columns from the same row, all operations from same client app.

    The consistency level is read/write quorum.

    Check the Cassandra log, the local node don't perform the delete
 operation but send the mutation to other nodes (192.168.0.6, 192.168.0.1)

    After delete, I try to read all columns from the row, I found the node
 found Digest mismatch due to Quorum consistency configuration, but the
 result is not correct.

    From the log, I can see the delete mutation already accepted
 by 192.168.0.6, 192.168.0.1,  but when 192.168.0.5 read response from 0.6
 and 0.1, and then it merge the data, but finally 0.5 shows the result which
 is the dirty data.

    Following logs shows the change of column 737461747573 , 192.168.0.5
 try to read from 0.1 and 0.6, it should be deleted, but finally it shows it
 has the data.

 log:
 192.168.0.5
 DEBUG [Thrift:17] 2012-06-28 15:59:42,198 StorageProxy.java (line 653)
 Command/ConsistencyLevel is SliceByNamesReadCommand(table='drc',
 key=7878323239537570657254616e67307878,
 columnParent='QueryPath(columnFamilyName='queue', superColumnName='null',
 columnName='null')',
 columns=[6578656375746554696d65,6669726554696d65,67726f75705f6964,696e517565756554696d65,6c6f67526f6f744964,6d6f54797065,706172746974696f6e,7265636569766554696d65,72657175657374,7265747279,7365727669636550726f7669646572,737461747573,757365724e616d65,])/QUORUM
 DEBUG [Thrift:17] 2012-06-28 15:59:42,198 ReadCallback.java (line 79)
 Blockfor is 2; setting up requests to /192.168.0.6,/192.168.0.1
 DEBUG [Thrift:17] 2012-06-28 15:59:42,198 StorageProxy.java (line 674)
 reading data from /192.168.0.6
 DEBUG [Thrift:17] 2012-06-28 15:59:42,198 StorageProxy.java (line 694)
 reading digest from /192.168.0.1
 DEBUG [RequestResponseStage:2] 2012-06-28 15:59:42,199
 ResponseVerbHandler.java (line 44) Processing response on a callback from
 6556@/192.168.0.6
 DEBUG [RequestResponseStage:2] 2012-06-28 15:59:42,199
 AbstractRowResolver.java (line 66) Preprocessed data response
 DEBUG [RequestResponseStage:6] 2012-06-28 15:59:42,199
 ResponseVerbHandler.java (line 44) Processing response on a callback from
 6557@/192.168.0.1
 DEBUG [RequestResponseStage:6] 2012-06-28 15:59:42,199
 AbstractRowResolver.java (line 66) Preprocessed digest response
 DEBUG [Thrift:17] 2012-06-28 15:59:42,199 RowDigestResolver.java (line 65)
 resolving 2 responses
 DEBUG [Thrift:17] 2012-06-28 15:59:42,200 StorageProxy.java (line 733)
 Digest mismatch: org.apache.cassandra.service.DigestMismatchException:
 Mismatch for key DecoratedKey(100572974179274741747356988451225858264,
 7878323239537570657254616e67307878) (b725ab25696111be49aaa7c4b7afa52d vs
 d41d8cd98f00b204e9800998ecf8427e)
 DEBUG [RequestResponseStage:9] 2012-06-28 15:59:42,201
 ResponseVerbHandler.java (line 44) Processing response on a callback from
 6558@/192.168.0.6
 DEBUG [RequestResponseStage:7] 2012-06-28 15:59:42,201
 ResponseVerbHandler.java (line 44) Processing response on a callback from
 6559@/192.168.0.1
 DEBUG [RequestResponseStage:9] 2012-06-28 15:59:42,201
 AbstractRowResolver.java (line 66) Preprocessed data response
 DEBUG [RequestResponseStage:7] 2012-06-28 15:59:42,201
 AbstractRowResolver.java (line 66) Preprocessed data response
 DEBUG [Thrift:17] 2012-06-28 15:59:42,201 RowRepairResolver.java (line 63)
 resolving 2 responses
 DEBUG [Thrift:17] 2012-06-28 15:59:42,201 SliceQueryFilter.java (line 123)
 collecting 0 of 2147483647: 6669726554696d65:false:13@1340870382109004
 DEBUG [Thrift:17] 2012-06-28 15:59:42,201 SliceQueryFilter.java (line 123)
 collecting 1 of 2147483647: 67726f75705f6964:false:10@1340870382109014
 DEBUG [Thrift:17] 2012-06-28 15:59:42,201 SliceQueryFilter.java (line 123)
 collecting 2 of 2147483647: 696e517565756554696d65:false:13@1340870382109005
 DEBUG [Thrift:17] 2012-06-28 15:59:42,201 SliceQueryFilter.java (line 123)
 collecting 3 of 2147483647: 6c6f67526f6f744964:false:7@1340870382109015
 DEBUG [Thrift:17] 2012-06-28 15:59:42,202 SliceQueryFilter.java (line 123)
 collecting 4 of 2147483647: 6d6f54797065:false:6@1340870382109009
 DEBUG [Thrift:17] 2012-06-28 15:59:42,202 SliceQueryFilter.java (line 123)
 collecting 5 of 2147483647: 706172746974696f6e:false:2@1340870382109001
 DEBUG [Thrift:17] 2012-06-28 15:59:42,202 SliceQueryFilter.java (line 123)
 collecting 6 of 2147483647: 7265636569766554696d65:false:13@1340870382109003
 DEBUG [Thrift:17] 2012-06-28 15:59:42,202 SliceQueryFilter.java (line 123)
 collecting 7 of 2147483647: 72657175657374:false:300@1340870382109013
 DEBUG [RequestResponseStage:5] 2012-06-28 15:59:42,202
 ResponseVerbHandler.java (line 44) Processing response on a callback from
 6552

Re: Failed to solve Digest mismatch

2012-07-01 Thread Jason Tang
For the create/update/deleteColumn/deleteRow test case, with Quorum
consistency level, 6 nodes and replication factor 3, I can reproduce this in
roughly 1 out of 100 rounds with a single thread.

And if I run the test client with 20 threads, the ratio is higher.

And each test group is executed by one thread, and the client timestamps are
unique and sequenced, guaranteed by Hector.

And the client only accesses the data from the local Cassandra node.

And the query only uses the row key, which is unique. The column names are not
unique across rows, e.g. status.

And the row has around 7 columns, all of them small, e.g. status:true,
userName:Jason ...

BRs
//Ares

2012/7/1 Jonathan Ellis jbel...@gmail.com

 Is this Cassandra 1.1.1?

 How often do you observe this?  How many columns are in the row?  Can
 you reproduce when querying by column name, or only when slicing the
 row?

 On Thu, Jun 28, 2012 at 7:24 AM, Jason Tang ares.t...@gmail.com wrote:
  Hi
 
 First I delete one column, then I delete one row. Then try to read all
  columns from the same row, all operations from same client app.
 
 The consistency level is read/write quorum.
 
 Check the Cassandra log, the local node don't perform the delete
  operation but send the mutation to other nodes (192.168.0.6, 192.168.0.1)
 
 After delete, I try to read all columns from the row, I found the node
  found Digest mismatch due to Quorum consistency configuration, but the
  result is not correct.
 
 From the log, I can see the delete mutation already accepted
  by 192.168.0.6, 192.168.0.1,  but when 192.168.0.5 read response from 0.6
  and 0.1, and then it merge the data, but finally 0.5 shows the result
 which
  is the dirty data.
 
 Following logs shows the change of column 737461747573 , 192.168.0.5
  try to read from 0.1 and 0.6, it should be deleted, but finally it shows
 it
  has the data.
 
  log:
  192.168.0.5
  DEBUG [Thrift:17] 2012-06-28 15:59:42,198 StorageProxy.java (line 653)
  Command/ConsistencyLevel is SliceByNamesReadCommand(table='drc',
  key=7878323239537570657254616e67307878,
  columnParent='QueryPath(columnFamilyName='queue', superColumnName='null',
  columnName='null')',
 
 columns=[6578656375746554696d65,6669726554696d65,67726f75705f6964,696e517565756554696d65,6c6f67526f6f744964,6d6f54797065,706172746974696f6e,7265636569766554696d65,72657175657374,7265747279,7365727669636550726f7669646572,737461747573,757365724e616d65,])/QUORUM
  DEBUG [Thrift:17] 2012-06-28 15:59:42,198 ReadCallback.java (line 79)
  Blockfor is 2; setting up requests to /192.168.0.6,/192.168.0.1
  DEBUG [Thrift:17] 2012-06-28 15:59:42,198 StorageProxy.java (line 674)
  reading data from /192.168.0.6
  DEBUG [Thrift:17] 2012-06-28 15:59:42,198 StorageProxy.java (line 694)
  reading digest from /192.168.0.1
  DEBUG [RequestResponseStage:2] 2012-06-28 15:59:42,199
  ResponseVerbHandler.java (line 44) Processing response on a callback from
  6556@/192.168.0.6
  DEBUG [RequestResponseStage:2] 2012-06-28 15:59:42,199
  AbstractRowResolver.java (line 66) Preprocessed data response
  DEBUG [RequestResponseStage:6] 2012-06-28 15:59:42,199
  ResponseVerbHandler.java (line 44) Processing response on a callback from
  6557@/192.168.0.1
  DEBUG [RequestResponseStage:6] 2012-06-28 15:59:42,199
  AbstractRowResolver.java (line 66) Preprocessed digest response
  DEBUG [Thrift:17] 2012-06-28 15:59:42,199 RowDigestResolver.java (line
 65)
  resolving 2 responses
  DEBUG [Thrift:17] 2012-06-28 15:59:42,200 StorageProxy.java (line 733)
  Digest mismatch: org.apache.cassandra.service.DigestMismatchException:
  Mismatch for key DecoratedKey(100572974179274741747356988451225858264,
  7878323239537570657254616e67307878) (b725ab25696111be49aaa7c4b7afa52d vs
  d41d8cd98f00b204e9800998ecf8427e)
  DEBUG [RequestResponseStage:9] 2012-06-28 15:59:42,201
  ResponseVerbHandler.java (line 44) Processing response on a callback from
  6558@/192.168.0.6
  DEBUG [RequestResponseStage:7] 2012-06-28 15:59:42,201
  ResponseVerbHandler.java (line 44) Processing response on a callback from
  6559@/192.168.0.1
  DEBUG [RequestResponseStage:9] 2012-06-28 15:59:42,201
  AbstractRowResolver.java (line 66) Preprocessed data response
  DEBUG [RequestResponseStage:7] 2012-06-28 15:59:42,201
  AbstractRowResolver.java (line 66) Preprocessed data response
  DEBUG [Thrift:17] 2012-06-28 15:59:42,201 RowRepairResolver.java (line
 63)
  resolving 2 responses
  DEBUG [Thrift:17] 2012-06-28 15:59:42,201 SliceQueryFilter.java (line
 123)
  collecting 0 of 2147483647: 6669726554696d65:false:13@1340870382109004
  DEBUG [Thrift:17] 2012-06-28 15:59:42,201 SliceQueryFilter.java (line
 123)
  collecting 1 of 2147483647: 67726f75705f6964:false:10@1340870382109014
  DEBUG [Thrift:17] 2012-06-28 15:59:42,201 SliceQueryFilter.java (line
 123)
  collecting 2 of 2147483647:
 696e517565756554696d65:false:13@1340870382109005
  DEBUG [Thrift:17] 2012-06-28 15:59:42,201 SliceQueryFilter.java (line
 123)
  collecting 3 of 2147483647

Failed to solve Digest mismatch

2012-06-28 Thread Jason Tang
Hi

   First I delete one column, then I delete the whole row. Then I try to read all
columns from the same row; all operations are from the same client app.

   The consistency level is read/write quorum.
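
The same sequence expressed in today's CQL, as a sketch with the DataStax Java
driver (the original test used Hector against a Thrift column family; the
keyspace, table and key names here are placeholders):

    // Sketch of the reproduction sequence: delete a column, delete the row,
    // then read the whole row back, all at QUORUM from the same client.
    import com.datastax.oss.driver.api.core.ConsistencyLevel;
    import com.datastax.oss.driver.api.core.CqlSession;
    import com.datastax.oss.driver.api.core.cql.SimpleStatement;

    public class DeleteThenReadRepro {
        public static void main(String[] args) {
            try (CqlSession session = CqlSession.builder().build()) {
                String key = "xx229SuperTang0xx";
                session.execute(SimpleStatement.newInstance(
                        "DELETE status FROM drc.queue WHERE key = ?", key)
                    .setConsistencyLevel(ConsistencyLevel.QUORUM));
                session.execute(SimpleStatement.newInstance(
                        "DELETE FROM drc.queue WHERE key = ?", key)
                    .setConsistencyLevel(ConsistencyLevel.QUORUM));
                // Expected: no columns come back. In the failure described here,
                // the read still returns the pre-delete values.
                session.execute(SimpleStatement.newInstance(
                        "SELECT * FROM drc.queue WHERE key = ?", key)
                    .setConsistencyLevel(ConsistencyLevel.QUORUM));
            }
        }
    }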

   Checking the Cassandra log, the local node doesn't perform the delete
operation itself but sends the mutation to the other nodes (192.168.0.6, 192.168.0.1).

   After the delete, I try to read all columns from the row. The node detects a
Digest mismatch because of the Quorum consistency configuration, but the
result is not correct.

   From the log, I can see the delete mutation was already accepted
by 192.168.0.6 and 192.168.0.1, but when 192.168.0.5 reads the responses from 0.6
and 0.1 and merges the data, 0.5 finally shows a result which
is the dirty data.

   The following logs show the change of column 737461747573 ; 192.168.0.5
tries to read from 0.1 and 0.6, and the column should be deleted, but finally it shows it
has the data.
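
The column and key names in the logs below are hex-encoded; a small helper
(plain Java, only for reading the logs) decodes them, e.g. 737461747573 is
"status" and 7878323239537570657254616e67307878 is the row key. Note also that
d41d8cd98f00b204e9800998ecf8427e in the digest-mismatch line is the MD5 of empty
input, which suggests one side of the comparison saw no live data for the row.

    // Helper for reading the hex-encoded names in the log excerpts below.
    public class HexNames {
        static String decode(String hex) {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < hex.length(); i += 2) {
                sb.append((char) Integer.parseInt(hex.substring(i, i + 2), 16));
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            System.out.println(decode("737461747573"));                        // status
            System.out.println(decode("7878323239537570657254616e67307878"));  // xx229SuperTang0xx
        }
    }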

log:
192.168.0.5
DEBUG [Thrift:17] 2012-06-28 15:59:42,198 StorageProxy.java (line 653)
Command/ConsistencyLevel is SliceByNamesReadCommand(table='drc',
key=7878323239537570657254616e67307878,
columnParent='QueryPath(columnFamilyName='queue', superColumnName='null',
columnName='null')',
columns=[6578656375746554696d65,6669726554696d65,67726f75705f6964,696e517565756554696d65,6c6f67526f6f744964,6d6f54797065,706172746974696f6e,7265636569766554696d65,72657175657374,7265747279,7365727669636550726f7669646572,
737461747573,757365724e616d65,])/QUORUM
DEBUG [Thrift:17] 2012-06-28 15:59:42,198 ReadCallback.java (line 79)
Blockfor is 2; setting up requests to /192.168.0.6,/192.168.0.1
DEBUG [Thrift:17] 2012-06-28 15:59:42,198 StorageProxy.java (line 674)
reading data from /192.168.0.6
DEBUG [Thrift:17] 2012-06-28 15:59:42,198 StorageProxy.java (line 694)
reading digest from /192.168.0.1
DEBUG [RequestResponseStage:2] 2012-06-28 15:59:42,199
ResponseVerbHandler.java (line 44) Processing response on a callback from
6556@/192.168.0.6
DEBUG [RequestResponseStage:2] 2012-06-28 15:59:42,199
AbstractRowResolver.java (line 66) Preprocessed data response
DEBUG [RequestResponseStage:6] 2012-06-28 15:59:42,199
ResponseVerbHandler.java (line 44) Processing response on a callback from
6557@/192.168.0.1
DEBUG [RequestResponseStage:6] 2012-06-28 15:59:42,199
AbstractRowResolver.java (line 66) Preprocessed digest response
DEBUG [Thrift:17] 2012-06-28 15:59:42,199 RowDigestResolver.java (line 65)
resolving 2 responses
DEBUG [Thrift:17] 2012-06-28 15:59:42,200 StorageProxy.java (line 733)
Digest mismatch: org.apache.cassandra.service.DigestMismatchException:
Mismatch for key DecoratedKey(100572974179274741747356988451225858264,
7878323239537570657254616e67307878) (b725ab25696111be49aaa7c4b7afa52d vs
d41d8cd98f00b204e9800998ecf8427e)
DEBUG [RequestResponseStage:9] 2012-06-28 15:59:42,201
ResponseVerbHandler.java (line 44) Processing response on a callback from
6558@/192.168.0.6
DEBUG [RequestResponseStage:7] 2012-06-28 15:59:42,201
ResponseVerbHandler.java (line 44) Processing response on a callback from
6559@/192.168.0.1
DEBUG [RequestResponseStage:9] 2012-06-28 15:59:42,201
AbstractRowResolver.java (line 66) Preprocessed data response
DEBUG [RequestResponseStage:7] 2012-06-28 15:59:42,201
AbstractRowResolver.java (line 66) Preprocessed data response
DEBUG [Thrift:17] 2012-06-28 15:59:42,201 RowRepairResolver.java (line 63)
resolving 2 responses
DEBUG [Thrift:17] 2012-06-28 15:59:42,201 SliceQueryFilter.java (line 123)
collecting 0 of 2147483647: 6669726554696d65:false:13@1340870382109004
DEBUG [Thrift:17] 2012-06-28 15:59:42,201 SliceQueryFilter.java (line 123)
collecting 1 of 2147483647: 67726f75705f6964:false:10@1340870382109014
DEBUG [Thrift:17] 2012-06-28 15:59:42,201 SliceQueryFilter.java (line 123)
collecting 2 of 2147483647: 696e517565756554696d65:false:13@1340870382109005
DEBUG [Thrift:17] 2012-06-28 15:59:42,201 SliceQueryFilter.java (line 123)
collecting 3 of 2147483647: 6c6f67526f6f744964:false:7@1340870382109015
DEBUG [Thrift:17] 2012-06-28 15:59:42,202 SliceQueryFilter.java (line 123)
collecting 4 of 2147483647: 6d6f54797065:false:6@1340870382109009
DEBUG [Thrift:17] 2012-06-28 15:59:42,202 SliceQueryFilter.java (line 123)
collecting 5 of 2147483647: 706172746974696f6e:false:2@1340870382109001
DEBUG [Thrift:17] 2012-06-28 15:59:42,202 SliceQueryFilter.java (line 123)
collecting 6 of 2147483647: 7265636569766554696d65:false:13@1340870382109003
DEBUG [Thrift:17] 2012-06-28 15:59:42,202 SliceQueryFilter.java (line 123)
collecting 7 of 2147483647: 72657175657374:false:300@1340870382109013
DEBUG [RequestResponseStage:5] 2012-06-28 15:59:42,202
ResponseVerbHandler.java (line 44) Processing response on a callback from
6552@/192.168.0.1
DEBUG [Thrift:17] 2012-06-28 15:59:42,202 SliceQueryFilter.java (line 123)
collecting 8 of 2147483647: 7265747279:false:1@1340870382109006
DEBUG [Thrift:17] 2012-06-28 15:59:42,202 SliceQueryFilter.java (line 123)
collecting 9 of 2147483647:
7365727669636550726f7669646572:false:4@1340870382109007
DEBUG [Thrift:17