[jira] [Commented] (CASSANDRA-14929) twcs sstables gets merged following node removal

2018-12-17 Thread Varun Barala (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16723708#comment-16723708
 ] 

Varun Barala commented on CASSANDRA-14929:
--

1) An SSTable's timestamp is not judged by when the SSTable was flushed; it is 
derived from the liveness info of the partitions inside it.
[https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java#L318]
[https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/metadata/StatsMetadata.java]

2) I tried to reproduce this locally and found that it only merged SSTables 
that contain identical partitions, where one SSTable came from the removed node's replica.
[https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L314]
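To illustrate the bucketing, here is a minimal self-contained sketch (my own code, not Cassandra's; the timestamps are hypothetical) of how TWCS derives a window lower bound from an SSTable's *max data timestamp*, in the spirit of `getWindowBoundsInMillis`: two SSTables whose data falls in the same window land in the same bucket regardless of when each file was flushed or streamed.

```java
import java.util.concurrent.TimeUnit;

public class TwcsWindowSketch {
    // Simplified model of TWCS bucketing: an SSTable lands in the window that
    // contains its max data timestamp (from StatsMetadata), not its flush time.
    static long windowLowerBound(long maxTimestampMillis, long windowSizeMillis) {
        return maxTimestampMillis - (maxTimestampMillis % windowSizeMillis);
    }

    public static void main(String[] args) {
        long window = TimeUnit.HOURS.toMillis(2);   // compaction_window_size: 2 HOURS
        long local = 1545109200000L;                // hypothetical max timestamp of mc-4
        long streamed = local + 60_000L;            // mc-10 streamed later, same data window
        // Same window lower bound => same bucket => eligible for compaction together.
        System.out.println(windowLowerBound(local, window) == windowLowerBound(streamed, window)); // true
    }
}
```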

Logs on local:-
{code:java}
DEBUG [CompactionExecutor:8] 2018-12-18 13:00:00,952 
TimeWindowCompactionStrategy.java:299 - bucket size 2 >= 2 and not in current 
bucket, compacting what's here: 
[BigTableReader(path='/home/barala/.ccm/twcs/node2/data0/ks2/t1-cbfc3540027a11e9b8242167b8f53585/mc-4-big-Data.db'),
 
BigTableReader(path='/home/barala/.ccm/twcs/node2/data0/ks2/t1-cbfc3540027a11e9b8242167b8f53585/mc-10-big-Data.db')]
DEBUG [CompactionExecutor:8] 2018-12-18 13:00:00,953 CompactionTask.java:158 - 
Compacting (c61ff290-0281-11e9-985b-b3987cee6cf4) 
[/home/barala/.ccm/twcs/node2/data0/ks2/t1-cbfc3540027a11e9b8242167b8f53585/mc-4-big-Data.db:level=0,
 
/home/barala/.ccm/twcs/node2/data0/ks2/t1-cbfc3540027a11e9b8242167b8f53585/mc-10-big-Data.db:level=0,
 ]{code}
*mc-4* was on the local node and *mc-10* came from a replica node. Both fall 
into the same bucket since they contain identical partitions (and therefore the 
same maximum timestamps).


{code:java}
DEBUG [NonPeriodicTasks:1] 2018-12-18 13:00:00,991 SSTable.java:107 - Deleting 
sstable: 
/home/barala/.ccm/twcs/node2/data0/ks2/t1-cbfc3540027a11e9b8242167b8f53585/mc-4-big
DEBUG [CompactionExecutor:8] 2018-12-18 13:00:00,991 CompactionTask.java:235 - 
Compacted (c61ff290-0281-11e9-985b-b3987cee6cf4) 2 sstables to 
[/home/barala/.ccm/twcs/node2/data0/ks2/t1-cbfc3540027a11e9b8242167b8f53585/mc-11-big,]
 to level=0. 125 bytes to 75 (~60% of original) in 37ms = 0.001933MB/s. 4 total 
partitions merged to 3. Partition merge counts were {1:2, 2:1, }{code}

IMO this is the expected behavior.

[~jjirsa] Could you please take a look? If it looks good, this ticket can be 
closed. Thank you!

 

> twcs sstables gets merged following node removal
> 
>
> Key: CASSANDRA-14929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14929
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: cassandra 3.0.17
>Reporter: Gil Ganz
>Priority: Major
>  Labels: cassandra, compaction, twcs
>
> after removing a node from the cluster, a table that is defined as twcs, has 
> sstables from different time windows merged together, making old and new data 
> sit in the same sstable.
> CREATE KEYSPACE gil_test WITH replication = \{'class': 
> 'NetworkTopologyStrategy', 'DC1': '2'} AND durable_writes = true;
> CREATE TABLE gil_test.my_test (
>  id int,
>  creation_time timestamp,
>  name text,
>  PRIMARY KEY (id, creation_time)
> ) WITH CLUSTERING ORDER BY (creation_time ASC)
>  AND bloom_filter_fp_chance = 0.01
>  AND caching = \{'keys': 'ALL', 'rows_per_partition': 'NONE'}
>  AND comment = ''
>  AND compaction = \{'compaction_window_unit': 'HOURS', 
> 'compaction_window_size': '2', 'class': 'TimeWindowCompactionStrategy'}
>  AND compression = \{'chunk_length_kb': '4', 'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
>  AND dclocal_read_repair_chance = 0.0
>  AND default_time_to_live = 0
>  AND gc_grace_seconds = 3600
>  AND max_index_interval = 2048
>  AND memtable_flush_period_in_ms = 0
>  AND min_index_interval = 128
>  AND read_repair_chance = 0.0
>  AND speculative_retry = 'NONE';
>  
> 3 nodes cluster
> before removing node number 3 - directory listing
> drwxr-xr-x 2 cassandra cassandra 4096 Dec 10 20:28 backups
> -rw-r--r-- 1 cassandra cassandra 51 Dec 10 22:10 mc-16-big-CompressionInfo.db
> -rw-r--r-- 1 cassandra cassandra 2044 Dec 10 22:10 mc-16-big-Data.db
> -rw-r--r-- 1 cassandra cassandra 9 Dec 10 22:10 mc-16-big-Digest.crc32
> -rw-r--r-- 1 cassandra cassandra 64 Dec 10 22:10 mc-16-big-Filter.db
> -rw-r--r-- 1 cassandra cassandra 375 Dec 10 22:10 mc-16-big-Index.db
> -rw-r--r-- 1 cassandra cassandra 4805 Dec 10 22:10 mc-16-big-Statistics.db
> -rw-r--r-- 1 cassandra cassandra 56 Dec 10 22:10 mc-16-big-Summary.db
> -rw-r--r-- 1 cassandra cassandra 92 Dec 10 22:10 mc-16-big-TOC.txt
> -rw-r--r-- 1 cassandra cassandra 51 Dec 11 00:00 mc-31-big-CompressionInfo.db
> -rw-r--r-- 1 cassandra cassandra 

[jira] [Comment Edited] (CASSANDRA-14702) Cassandra Write failed even when the required nodes to Ack(consistency) are up.

2018-12-16 Thread Varun Barala (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722455#comment-16722455
 ] 

Varun Barala edited comment on CASSANDRA-14702 at 12/16/18 5:04 PM:


> We get the writetimeout exception from cassandra even when 2 nodes are up
In a 5-node cluster with RF 3, you cannot expect every query to succeed when 
only 2 nodes are up.

Because:-
Let's say you have 5 nodes, each owning a single token (num_tokens: 1).
The ring will look like:-
{code:java}
-9223372036854775808   1
-5534023222112865485   2
-1844674407370955162   3
1844674407370955161    4
5534023222112865484    5{code}
Now let's say you have keyspace:-
{code:java}
CREATE KEYSPACE ks1
WITH durable_writes = true
AND replication = {
    'class' : 'SimpleStrategy',
    'replication_factor' : 3
};
{code}
table structure:-
{code:java}
CREATE TABLE ks1.table1 (
    id boolean,
    pk1 boolean,
    pk2 boolean,
    ck1 int,
    PRIMARY KEY ((id,pk1,pk2))
);{code}
Let's insert two statements:-
{code:java}
insert into ks1.table1 (id, pk1, pk2, ck1) VALUES (true, true, true, 1);

insert into ks1.table1 (id, pk1, pk2, ck1) VALUES (false, true, true, 1);{code}
Let's see ring token for the above partitions:-
{code:java}
select token(id,pk1,pk2) from ks1.table1;{code}
It'll return result:-
{code:java}
-3439815377359905503 this belongs to node2 and should have replica node3 and 
node4
 6885159420904076627 this belongs to node5 and should have replica node1 and 
node2
{code}
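The token-to-replica mapping above can be sketched as follows (my own illustrative helper, not Cassandra code, using the convention above that a node owns the range starting at its token; with SimpleStrategy the remaining replicas are simply the next RF-1 nodes clockwise on the ring):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class RingSketch {
    // The 5-node single-token ring from above (token -> node number).
    static final TreeMap<Long, Integer> RING = new TreeMap<>(Map.of(
        Long.MIN_VALUE, 1,
        -5534023222112865485L, 2,
        -1844674407370955162L, 3,
        1844674407370955161L, 4,
        5534023222112865484L, 5));

    // Primary replica owns the range the token falls in; SimpleStrategy then
    // places replicas on the next rf-1 nodes clockwise, wrapping around.
    static List<Integer> replicas(long token, int rf) {
        List<Integer> out = new ArrayList<>();
        Long key = RING.floorKey(token);        // node whose range starts at/below token
        if (key == null) key = RING.lastKey();  // wrap around the ring
        while (out.size() < rf) {
            out.add(RING.get(key));
            key = RING.higherKey(key);
            if (key == null) key = RING.firstKey();
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(replicas(-3439815377359905503L, 3)); // node2, then node3, node4
        System.out.println(replicas(6885159420904076627L, 3));  // node5, then node1, node2
    }
}
```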
Now let's say you update the above two partitions with a batch statement while 
only node2 and node5 are up. The write for the partition owned by node5 
succeeds because node2 and node5 satisfy quorum, but the write for the 
partition owned by node2 fails since its remaining replicas (node3 and node4) 
are both down.
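The quorum arithmetic behind this, as a tiny illustrative sketch (the names are mine, not Cassandra's):

```java
public class QuorumSketch {
    // LOCAL_QUORUM requires floor(RF / 2) + 1 live replicas of the partition.
    static boolean quorumAvailable(int rf, int aliveReplicas) {
        return aliveReplicas >= rf / 2 + 1;
    }

    public static void main(String[] args) {
        // Partition on {node5, node1, node2}: node2 and node5 up -> 2 of 3 alive.
        System.out.println(quorumAvailable(3, 2)); // true  -> write succeeds
        // Partition on {node2, node3, node4}: only node2 up -> 1 of 3 alive.
        System.out.println(quorumAvailable(3, 1)); // false -> UnavailableException
    }
}
```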
Exception with only Node2 and Node5 up:-
{code:java}
Caused by: com.datastax.driver.core.exceptions.UnavailableException: Not enough 
replicas available for query at consistency LOCAL_QUORUM (2 required but only 1 
alive)
    at 
com.datastax.driver.core.exceptions.UnavailableException.copy(UnavailableException.java:128)
    at com.datastax.driver.core.Responses$Error.asException(Responses.java:114)
    at 
com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:504)
    at 
com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1070)
    at 
com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:993)
    at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105){code}
Then I reproduced the write-timeout scenario by shutting nodes down during 
query execution:-
{code:java}
Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra 
timeout during write query at consistency LOCAL_QUORUM (2 replica were required 
but only 1 acknowledged the write)
    at 
com.datastax.driver.core.exceptions.WriteTimeoutException.copy(WriteTimeoutException.java:100)
    at com.datastax.driver.core.Responses$Error.asException(Responses.java:122)
    at 
com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:504)
    at 
com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1070)
    at 
com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:993)
    at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105){code}
*Cassandra version: 3.11.2*

[~rohitsngh27] I suspect your keyspace actually has RF 5. Could you check the 
above scenarios? Let me know if I'm missing anything.

 

 


[jira] [Commented] (CASSANDRA-14702) Cassandra Write failed even when the required nodes to Ack(consistency) are up.

2018-12-16 Thread Varun Barala (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722455#comment-16722455
 ] 

Varun Barala commented on CASSANDRA-14702:
--


> Cassandra Write failed even when the required nodes to Ack(consistency) are 
> up.
> ---
>
> Key: CASSANDRA-14702
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14702
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Rohit Singh
>Priority: Blocker
>
> Hi,
> We have following configuration in our project for cassandra. 
> Total nodes in Cluster-5
> Replication Factor- 3
> Consistency- LOCAL_QUORUM
> We get the writetimeout exception from cassandra even when 2 nodes are up and 
> why does stack trace says that 3 replica were required when consistency is 2?
> Below is the exception we got:-
> com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout 
> during write query at consistency LOCAL_QUORUM (3 replica were required but 
> only 2 acknowledged the write)
>  at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:59)
>  at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:37)
>  at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:289)
>  at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:269)
>  at 
> 

[jira] [Updated] (CASSANDRA-14752) serializers/BooleanSerializer.java is using static bytebuffers which may cause problem for subsequent operations

2018-12-12 Thread Varun Barala (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-14752:
-
Description: 
[https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/serializers/BooleanSerializer.java#L26]
 It has two static Bytebuffer variables:-
{code:java}
private static final ByteBuffer TRUE = ByteBuffer.wrap(new byte[]{1});
private static final ByteBuffer FALSE = ByteBuffer.wrap(new byte[]{0});{code}
What happens if the position of these ByteBuffers is changed by some other 
operation? It will affect all subsequent operations. -IMO Using static 
is not a good idea here.-

A potential place where it can become problematic: 
[https://github.com/apache/cassandra/blob/cassandra-2.1.13/src/java/org/apache/cassandra/db/marshal/AbstractCompositeType.java#L243]
 Since we call *`.remaining()`*, it may return a wrong result (_i.e._ 0) if 
these ByteBuffers have been consumed previously.
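A standalone demonstration of the hazard (a simplified reproduction of the static buffer, not the serializer itself):

```java
import java.nio.ByteBuffer;

public class SharedBufferHazard {
    // Same shape as BooleanSerializer's singleton (simplified reproduction).
    private static final ByteBuffer TRUE = ByteBuffer.wrap(new byte[]{1});

    public static void main(String[] args) {
        System.out.println(TRUE.remaining()); // 1 -> one readable byte
        TRUE.get();                           // some caller consumes the shared buffer
        System.out.println(TRUE.remaining()); // 0 -> every later caller sees it as empty
    }
}
```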

Solution: 
 
[https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/serializers/BooleanSerializer.java#L42]
 Return a new ByteBuffer object each time. Please let me know if there is a 
better way; I'd like to contribute. Thanks!!
{code:java}
public ByteBuffer serialize(Boolean value)
{
return (value == null) ? ByteBufferUtil.EMPTY_BYTE_BUFFER
: value ? ByteBuffer.wrap(new byte[] {1}) : ByteBuffer.wrap(new byte[] {0}); // 
false
}
{code}



> serializers/BooleanSerializer.java is using static bytebuffers which may 
> cause problem for subsequent operations
> 
>
> Key: CASSANDRA-14752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14752
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Varun Barala
>Priority: Major
> Fix For: 4.x
>
> Attachments: patch, patch-modified
>
>
> [https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/serializers/BooleanSerializer.java#L26]
>  It has two static Bytebuffer variables:-
> {code:java}
> private static final ByteBuffer TRUE = ByteBuffer.wrap(new byte[]{1});
> private static final ByteBuffer FALSE = ByteBuffer.wrap(new byte[]{0});{code}
> What will happen if the position of these Bytebuffers is being changed by 
> some other operations? It'll affect other subsequent operations. -IMO Using 
> static is not a good idea here.-
> A potential place where it can become problematic: 
> [https://github.com/apache/cassandra/blob/cassandra-2.1.13/src/java/org/apache/cassandra/db/marshal/AbstractCompositeType.java#L243]
>  Since we are calling *`.remaining()`* It may give wrong results _i.e 0_ if 
> these Bytebuffers have been used previously.
> Solution: 
>  
> [https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/serializers/BooleanSerializer.java#L42]
>  Every time we return new bytebuffer object. Please do let me know If there 
> is a better way. I'd like to contribute. Thanks!!
> {code:java}
> public ByteBuffer serialize(Boolean value)
> {
> return (value == null) ? ByteBufferUtil.EMPTY_BYTE_BUFFER
> : value ? ByteBuffer.wrap(new byte[] {1}) : ByteBuffer.wrap(new byte[] {0}); 
> // false
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14752) serializers/BooleanSerializer.java is using static bytebuffers which may cause problem for subsequent operations

2018-12-12 Thread Varun Barala (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-14752:
-
Fix Version/s: 4.x
Reproduced In: 4.x
   Status: Patch Available  (was: Open)

 
||MR||
|[trunk\|https://github.com/Barala/cassandra/commits/CASSANDRA-14752-trunk]|

I came up with a different approach: BooleanSerializer's static ByteBuffers can 
be detected with a reference-equality check. This avoids allocating a new 
ByteBuffer object on every call.

I raised an MR for trunk. If it passes review, I'll raise MRs to patch the 
other affected versions.
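A sketch of the idea (my own illustrative code, not the actual MR): detect the shared singletons by identity and hand out an independent view via `duplicate()`, which shares the underlying bytes but carries its own position, so no new byte[] is allocated and the singleton cannot be disturbed.

```java
import java.nio.ByteBuffer;

public class RefEqualityFix {
    private static final ByteBuffer TRUE = ByteBuffer.wrap(new byte[]{1});
    private static final ByteBuffer FALSE = ByteBuffer.wrap(new byte[]{0});

    // Identity check (==), not equals(): only the two singletons get wrapped.
    static ByteBuffer safeView(ByteBuffer b) {
        return (b == TRUE || b == FALSE) ? b.duplicate() : b;
    }

    public static void main(String[] args) {
        ByteBuffer view = safeView(TRUE);
        view.get();                           // consuming the duplicate...
        System.out.println(TRUE.remaining()); // 1 -> ...leaves the singleton untouched
    }
}
```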







[jira] [Comment Edited] (CASSANDRA-14752) serializers/BooleanSerializer.java is using static bytebuffers which may cause problem for subsequent operations

2018-09-17 Thread Varun Barala (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617276#comment-16617276
 ] 

Varun Barala edited comment on CASSANDRA-14752 at 9/17/18 9:32 AM:
---

There are many usages of `AbstractCompositeType#fromString()`. 
Here is one way to corrupt data:-

Table schema:-
{code:java}
CREATE TABLE ks1.table1 (
t_id boolean,
id boolean,
ck boolean,
nk boolean,
PRIMARY KEY ((t_id,id),ck)
);{code}
Insert statement:-
{code:java}
insert into ks1.table1 (t_id, ck, id, nk)
VALUES (true, false, false, true);
{code}
Now run nodetool command to get the SSTable for given key:-
{code:java}
bin/nodetool getsstables  ks1 table1 "false:true"
{code}
Internally, this operation consumes the static ByteBuffers, advancing their positions.

Insert again:-
{code:java}
insert into ks1.table1 (t_id, ck, id, nk)
VALUES (true, true, false, true);
{code}
select data from this table:-
{code:java}
true,false,false,true
null,null,null,null
{code}
So now all boolean-typed data is read back as null.












[jira] [Commented] (CASSANDRA-14752) serializers/BooleanSerializer.java is using static bytebuffers which may cause problem for subsequent operations

2018-09-17 Thread Varun Barala (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617225#comment-16617225
 ] 

Varun Barala commented on CASSANDRA-14752:
--

 

[~blerer] Thanks for your reply. In one of our tools, we use the code below to 
generate a DecoratedKey from a String, and for boolean types we hit this issue.
{code:java}
DatabaseDescriptor.getPartitioner().decorateKey(getKeyValidator(row.getColumnFamily())
.fromString(stringKey));
{code}

[https://github.com/apache/cassandra/blob/cassandra-2.1.13/src/java/org/apache/cassandra/db/marshal/AbstractCompositeType.java#L255]
 `byteBuffer.put` advances the source buffer's position, even though the code 
carries the comment: *// it's ok to consume component as we won't use it anymore.*
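Here is a minimal demonstration (illustrative only) that a relative `put(ByteBuffer)` consumes its source buffer:

```java
import java.nio.ByteBuffer;

public class PutAdvancesPosition {
    public static void main(String[] args) {
        ByteBuffer component = ByteBuffer.wrap(new byte[]{1});
        ByteBuffer dest = ByteBuffer.allocate(4);
        dest.put(component);                       // relative put reads from component
        System.out.println(component.remaining()); // 0 -> component was consumed
    }
}
```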

 




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14752) serializers/BooleanSerializer.java is using static bytebuffers which may cause problem for subsequent operations

2018-09-16 Thread Varun Barala (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-14752:
-
Attachment: patch-modified

> serializers/BooleanSerializer.java is using static bytebuffers which may 
> cause problem for subsequent operations
> 
>
> Key: CASSANDRA-14752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14752
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Varun Barala
>Priority: Major
> Attachments: patch, patch-modified
>
>
> [https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/serializers/BooleanSerializer.java#L26]
>  It has two static ByteBuffer fields:
> {code:java}
> private static final ByteBuffer TRUE = ByteBuffer.wrap(new byte[]{1});
> private static final ByteBuffer FALSE = ByteBuffer.wrap(new byte[]{0});{code}
> If some other operation changes the position of these ByteBuffers, every subsequent read of them is affected. IMO, using shared static buffers is not a good idea here.
> A potential place where it can become problematic: 
> [https://github.com/apache/cassandra/blob/cassandra-2.1.13/src/java/org/apache/cassandra/db/marshal/AbstractCompositeType.java#L243]
>  Since we call *`.remaining()`*, it may return a wrong result, _i.e._ 0, if these ByteBuffers have been consumed previously.
> Solution: 
> [https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/serializers/BooleanSerializer.java#L42]
>  Return a new ByteBuffer object on every call. Please let me know if there is a better way; I'd like to contribute. Thanks!
> {code:java}
> public ByteBuffer serialize(Boolean value)
> {
>     return (value == null) ? ByteBufferUtil.EMPTY_BYTE_BUFFER
>                            : value ? ByteBuffer.wrap(new byte[] {1})  // true
>                                    : ByteBuffer.wrap(new byte[] {0}); // false
> }
> {code}






[jira] [Updated] (CASSANDRA-14752) serializers/BooleanSerializer.java is using static bytebuffers which may cause problem for subsequent operations

2018-09-14 Thread Varun Barala (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-14752:
-
Attachment: patch

> serializers/BooleanSerializer.java is using static bytebuffers which may 
> cause problem for subsequent operations
> 
>
> Key: CASSANDRA-14752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14752
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Varun Barala
>Priority: Major
> Attachments: patch
>
>
> [https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/serializers/BooleanSerializer.java#L26]
>  It has two static ByteBuffer fields:
> {code:java}
> private static final ByteBuffer TRUE = ByteBuffer.wrap(new byte[]{1});
> private static final ByteBuffer FALSE = ByteBuffer.wrap(new byte[]{0});{code}
> If some other operation changes the position of these ByteBuffers, every subsequent read of them is affected. IMO, using shared static buffers is not a good idea here.
> A potential place where it can become problematic: 
> [https://github.com/apache/cassandra/blob/cassandra-2.1.13/src/java/org/apache/cassandra/db/marshal/AbstractCompositeType.java#L243]
>  Since we call *`.remaining()`*, it may return a wrong result, _i.e._ 0, if these ByteBuffers have been consumed previously.
> Solution: 
> [https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/serializers/BooleanSerializer.java#L42]
>  Return a new ByteBuffer object on every call. Please let me know if there is a better way; I'd like to contribute. Thanks!
> {code:java}
> public ByteBuffer serialize(Boolean value)
> {
>     return (value == null) ? ByteBufferUtil.EMPTY_BYTE_BUFFER
>                            : value ? ByteBuffer.wrap(new byte[] {1})  // true
>                                    : ByteBuffer.wrap(new byte[] {0}); // false
> }
> {code}






[jira] [Updated] (CASSANDRA-14752) serializers/BooleanSerializer.java is using static bytebuffers which may cause problem for subsequent operations

2018-09-14 Thread Varun Barala (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-14752:
-
Summary: serializers/BooleanSerializer.java is using static bytebuffers 
which may cause problem for subsequent operations  (was: 
serializers/BooleanSerializer.java is using static bytebuffers which may cause 
problem for subsequent oeprations)

> serializers/BooleanSerializer.java is using static bytebuffers which may 
> cause problem for subsequent operations
> 
>
> Key: CASSANDRA-14752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14752
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Varun Barala
>Priority: Major
>
> [https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/serializers/BooleanSerializer.java#L26]
>  It has two static ByteBuffer fields:
> {code:java}
> private static final ByteBuffer TRUE = ByteBuffer.wrap(new byte[]{1});
> private static final ByteBuffer FALSE = ByteBuffer.wrap(new byte[]{0});{code}
> If some other operation changes the position of these ByteBuffers, every subsequent read of them is affected. IMO, using shared static buffers is not a good idea here.
> A potential place where it can become problematic: 
> [https://github.com/apache/cassandra/blob/cassandra-2.1.13/src/java/org/apache/cassandra/db/marshal/AbstractCompositeType.java#L243]
>  Since we call *`.remaining()`*, it may return a wrong result, _i.e._ 0, if these ByteBuffers have been consumed previously.
> Solution: 
> [https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/serializers/BooleanSerializer.java#L42]
>  Return a new ByteBuffer object on every call. Please let me know if there is a better way; I'd like to contribute. Thanks!
> {code:java}
> public ByteBuffer serialize(Boolean value)
> {
>     return (value == null) ? ByteBufferUtil.EMPTY_BYTE_BUFFER
>                            : value ? ByteBuffer.wrap(new byte[] {1})  // true
>                                    : ByteBuffer.wrap(new byte[] {0}); // false
> }
> {code}






[jira] [Created] (CASSANDRA-14752) serializers/BooleanSerializer.java is using static bytebuffers which may cause problem for subsequent oeprations

2018-09-14 Thread Varun Barala (JIRA)
Varun Barala created CASSANDRA-14752:


 Summary: serializers/BooleanSerializer.java is using static 
bytebuffers which may cause problem for subsequent oeprations
 Key: CASSANDRA-14752
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14752
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Varun Barala


[https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/serializers/BooleanSerializer.java#L26]
 It has two static ByteBuffer fields:
{code:java}
private static final ByteBuffer TRUE = ByteBuffer.wrap(new byte[]{1});
private static final ByteBuffer FALSE = ByteBuffer.wrap(new byte[]{0});{code}
If some other operation changes the position of these ByteBuffers, every subsequent read of them is affected. IMO, using shared static buffers is not a good idea here.

A potential place where it can become problematic: 
[https://github.com/apache/cassandra/blob/cassandra-2.1.13/src/java/org/apache/cassandra/db/marshal/AbstractCompositeType.java#L243]
 Since we call *`.remaining()`*, it may return a wrong result, _i.e._ 0, if these ByteBuffers have been consumed previously.

Solution: 
[https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/serializers/BooleanSerializer.java#L42]
 Return a new ByteBuffer object on every call. Please let me know if there is a better way; I'd like to contribute. Thanks!


{code:java}
public ByteBuffer serialize(Boolean value)
{
    return (value == null) ? ByteBufferUtil.EMPTY_BYTE_BUFFER
                           : value ? ByteBuffer.wrap(new byte[] {1})  // true
                                   : ByteBuffer.wrap(new byte[] {0}); // false
}
{code}
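An alternative sketch that avoids allocating a fresh array per call: keep the shared backing arrays but hand out `duplicate()` views, which share content while giving each caller an independent position and limit. The class and helper names here are illustrative, not the committed fix:

```java
import java.nio.ByteBuffer;

public class BooleanSerializerSketch {
    private static final ByteBuffer TRUE = ByteBuffer.wrap(new byte[]{1});
    private static final ByteBuffer FALSE = ByteBuffer.wrap(new byte[]{0});

    public static ByteBuffer serialize(Boolean value) {
        if (value == null)
            return ByteBuffer.allocate(0); // stand-in for ByteBufferUtil.EMPTY_BYTE_BUFFER
        // duplicate() shares the byte array but not position/limit, so a caller
        // consuming the returned buffer cannot corrupt TRUE/FALSE themselves.
        return value ? TRUE.duplicate() : FALSE.duplicate();
    }
}
```

Consuming one returned buffer leaves the next call unaffected, which removes the shared-state hazard without per-call array allocation.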






[jira] [Commented] (CASSANDRA-14300) Nodetool upgradesstables erring out with Null assertion error (2.2.5 to 3.11.1)

2018-04-02 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422029#comment-16422029
 ] 

Varun Barala commented on CASSANDRA-14300:
--

 
[https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/rows/BufferCell.java#L43]
The assertion there expects no primary-key column; I suspect you have some corrupt SSTables.


You can modify the source code to check which SSTable it is failing on: 
[https://github.com/apache/cassandra/blob/cassandra-3.11.1/src/java/org/apache/cassandra/io/sstable/SSTableIdentityIterator.java#L54]
 add a log line like `logger.warn("processing sstable {}", sstable.getFilename());`


I'm attaching the modified Cassandra source code: 
[^apache-cassandra-modified-3.11.1.jar]

> Nodetool upgradesstables erring out with Null assertion error (2.2.5 to 
> 3.11.1)
> ---
>
> Key: CASSANDRA-14300
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14300
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Bhanu M. Gandikota
>Priority: Blocker
> Attachments: apache-cassandra-modified-3.11.1.jar
>
>
> -bash-4.2$ nodetool upgradesstables
>  
> WARN  11:28:28,430 Small cdc volume detected at /cdc_raw; setting 
> cdc_total_space_in_mb to 1982.  You can override this in cassandra.yaml
>  
> error: null
> -- StackTrace --
> java.lang.AssertionError
>    at org.apache.cassandra.db.rows.BufferCell.(BufferCell.java:43)
>    at 
> org.apache.cassandra.db.LegacyLayout$CellGrouper.addCell(LegacyLayout.java:1242)
>    at 
> org.apache.cassandra.db.LegacyLayout$CellGrouper.addAtom(LegacyLayout.java:1185)
>    at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.readRow(UnfilteredDeserializer.java:495)
>    at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.hasNext(UnfilteredDeserializer.java:472)
>    at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.hasNext(UnfilteredDeserializer.java:306)
>    at 
> org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.readStaticRow(SSTableSimpleIterator.java:176)
>    at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.(SSTableIdentityIterator.java:49)
>    at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.create(SSTableIdentityIterator.java:59)
>    at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator$1.initializeIterator(BigTableScanner.java:384)
>    at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48)
>    at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.isReverseOrder(LazilyInitializedUnfilteredRowIterator.java:70)
>    at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:122)
>    at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:113)
>    at 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:466)
>    at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>    at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.hasNext(UnfilteredPartitionIterators.java:163)
>    at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:92)
>    at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:233)
>    at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:196)
>    at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>    at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:85)
>    at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>    at 
> org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:428)
>    at 
> org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:315)
>    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>    at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>    at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>    at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
>    at java.lang.Thread.run(Thread.java:745)
>  
> -bash-4.2$ 





[jira] [Updated] (CASSANDRA-14300) Nodetool upgradesstables erring out with Null assertion error (2.2.5 to 3.11.1)

2018-04-02 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-14300:
-
Attachment: apache-cassandra-modified-3.11.1.jar

> Nodetool upgradesstables erring out with Null assertion error (2.2.5 to 
> 3.11.1)
> ---
>
> Key: CASSANDRA-14300
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14300
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Bhanu M. Gandikota
>Priority: Blocker
> Attachments: apache-cassandra-modified-3.11.1.jar
>
>
> -bash-4.2$ nodetool upgradesstables
>  
> WARN  11:28:28,430 Small cdc volume detected at /cdc_raw; setting 
> cdc_total_space_in_mb to 1982.  You can override this in cassandra.yaml
>  
> error: null
> -- StackTrace --
> java.lang.AssertionError
>    at org.apache.cassandra.db.rows.BufferCell.(BufferCell.java:43)
>    at 
> org.apache.cassandra.db.LegacyLayout$CellGrouper.addCell(LegacyLayout.java:1242)
>    at 
> org.apache.cassandra.db.LegacyLayout$CellGrouper.addAtom(LegacyLayout.java:1185)
>    at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.readRow(UnfilteredDeserializer.java:495)
>    at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.hasNext(UnfilteredDeserializer.java:472)
>    at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.hasNext(UnfilteredDeserializer.java:306)
>    at 
> org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.readStaticRow(SSTableSimpleIterator.java:176)
>    at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.(SSTableIdentityIterator.java:49)
>    at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.create(SSTableIdentityIterator.java:59)
>    at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator$1.initializeIterator(BigTableScanner.java:384)
>    at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48)
>    at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.isReverseOrder(LazilyInitializedUnfilteredRowIterator.java:70)
>    at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:122)
>    at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:113)
>    at 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:466)
>    at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
>    at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.hasNext(UnfilteredPartitionIterators.java:163)
>    at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:92)
>    at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:233)
>    at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:196)
>    at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>    at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:85)
>    at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>    at 
> org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:428)
>    at 
> org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:315)
>    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>    at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>    at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>    at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
>    at java.lang.Thread.run(Thread.java:745)
>  
> -bash-4.2$ 






[jira] [Commented] (CASSANDRA-9375) force minumum timeout value

2017-12-07 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281558#comment-16281558
 ] 

Varun Barala commented on CASSANDRA-9375:
-

Oh yes. Shall I open a new ticket for this? I'll fix it. Thanks!

> force minumum timeout value 
> 
>
> Key: CASSANDRA-9375
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9375
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Assignee: Varun Barala
>Priority: Trivial
>  Labels: patch
> Fix For: 4.0
>
> Attachments: CASSANDRA-9375.patch, CASSANDRA-9375_after_review, 
> CASSANDRA-9375_after_review_2.patch
>
>
> Granted, this is a nonsensical setting, but the error message makes it tough 
> to discern what's wrong:
> {noformat}
> ERROR 17:13:28,726 Exception encountered during startup
> java.lang.ExceptionInInitializerError
>  at 
> org.apache.cassandra.net.MessagingService.instance(MessagingService.java:310)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:233)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:141)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:87)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:63)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:518)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:350)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:112)
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:213)
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:656)
> Caused by: java.lang.IllegalArgumentException
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:586)
>  at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.scheduleWithFixedDelay(DebuggableScheduledThreadPoolExecutor.java:64)
>  at org.apache.cassandra.utils.ExpiringMap.(ExpiringMap.java:103)
>  at 
> org.apache.cassandra.net.MessagingService.(MessagingService.java:360)
>  at org.apache.cassandra.net.MessagingService.(MessagingService.java:68)
>  at 
> org.apache.cassandra.net.MessagingService$MSHandle.(MessagingService.java:306)
>  ... 11 more
> java.lang.ExceptionInInitializerError
>  at 
> org.apache.cassandra.net.MessagingService.instance(MessagingService.java:310)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:233)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:141)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:87)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:63)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:518)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:350)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:112)
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:213)
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:656)
> Caused by: java.lang.IllegalArgumentException
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:586)
>  at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.scheduleWithFixedDelay(DebuggableScheduledThreadPoolExecutor.java:64)
>  at org.apache.cassandra.utils.ExpiringMap.(ExpiringMap.java:103)
>  at 
> org.apache.cassandra.net.MessagingService.(MessagingService.java:360)
>  at org.apache.cassandra.net.MessagingService.(MessagingService.java:68)
>  at 
> org.apache.cassandra.net.MessagingService$MSHandle.(MessagingService.java:306)
>  ... 11 more
> Exception encountered during startup: null
> {noformat}






[jira] [Commented] (CASSANDRA-13600) sstabledump possible problem

2017-08-27 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143156#comment-16143156
 ] 

Varun Barala commented on CASSANDRA-13600:
--

[~jjirsa] Okay, I'll write a unit test case for this. Shall I raise a GitHub pull request or attach a patch; which one is more convenient? Thanks!

> sstabledump possible problem
> 
>
> Key: CASSANDRA-13600
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13600
> Project: Cassandra
>  Issue Type: Bug
> Environment: Official cassandra docker image (last) under Win10
>Reporter: a8775
>Assignee: Varun Barala
>  Labels: patch
> Fix For: 3.10
>
> Attachments: CASSANDRA-13600.patch
>
>
> h2. Possible bug in sstabledump
> {noformat}
> cqlsh> show version
> [cqlsh 5.0.1 | Cassandra 3.10 | CQL spec 3.4.4 | Native protocol v4]
> {noformat}
> h2. Execute script in cqlsh in new keyspace
> {noformat}
> CREATE TABLE IF NOT EXISTS test_data (   
> // partitioning key
> PK TEXT, 
> // data
> Data TEXT,
> 
> PRIMARY KEY (PK)
> );
> insert into test_data(PK,Data) values('0','');
> insert into test_data(PK,Data) values('1','');
> insert into test_data(PK,Data) values('2','');
> delete from test_data where PK='1';
> insert into test_data(PK,Data) values('1','');
> {noformat}
> h2. Execute the following commands
> {noformat}
> nodetool flush
> nodetool compact
> sstabledump mc-2-big-Data.db
> sstabledump -d mc-2-big-Data.db
> {noformat}
> h3. default dump - missing data for partition key = "1"
> {noformat}
> [
>   {
> "partition" : {
>   "key" : [ "0" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 15,
> "liveness_info" : { "tstamp" : "2017-06-14T12:23:13.529389Z" },
> "cells" : [
>   { "name" : "data", "value" : "" }
> ]
>   }
> ]
>   },
>   {
> "partition" : {
>   "key" : [ "2" ],
>   "position" : 26
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 41,
> "liveness_info" : { "tstamp" : "2017-06-14T12:23:13.544132Z" },
> "cells" : [
>   { "name" : "data", "value" : "" }
> ]
>   }
> ]
>   },
>   {
> "partition" : {
>   "key" : [ "1" ],
>   "position" : 53,
>   "deletion_info" : { "marked_deleted" : "2017-06-14T12:23:13.545988Z", 
> "local_delete_time" : "2017-06-14T12:23:13Z" }
> }
>   }
> ]
> {noformat}
> h3. dump with -d option - correct data for partition key = "1"
> {noformat}
> [0]@0 Row[info=[ts=1497442993529389] ]:  | [data= ts=1497442993529389]
> [2]@26 Row[info=[ts=1497442993544132] ]:  | [data= ts=1497442993544132]
> [1]@53 deletedAt=1497442993545988, localDeletion=1497442993
> [1]@53 Row[info=[ts=1497442993550159] ]:  | [data= ts=1497442993550159]
> {noformat}






[jira] [Updated] (CASSANDRA-13600) sstabledump possible problem

2017-08-27 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-13600:
-
   Labels: patch  (was: )
Reproduced In: 3.10
   Status: Patch Available  (was: Open)

The sstabledump tool does not handle the case where a partition goes through an {{insert, delete, re-insert}} cycle.

This is fixed in 
https://github.com/apache/cassandra/commit/883c9f0f743139d78996f5faf191508a9be338b5
 for {{trunk}} and {{3.0.11}}.

Instead of the null check {{partition.staticRow() != null}}, the code should check {{!partition.staticRow().isEmpty()}}, since the contract guarantees a non-null (possibly empty) row:


{code:java}
/**
 * The static part corresponding to this partition (this can be an empty
 * row but cannot be {@code null}).
 */
public Row staticRow();
{code}
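A minimal sketch of that check, using stand-in interfaces rather than Cassandra's real `Row`/`Partition` types (names here are illustrative only):

```java
// The Row contract above says staticRow() can be empty but never null,
// so a null check is always true; emptiness must be tested explicitly.
interface Row { boolean isEmpty(); }
interface Partition { Row staticRow(); }

public class StaticRowCheck {
    static boolean shouldSerializeStaticRow(Partition p) {
        // wrong: p.staticRow() != null  -- always true by contract
        return !p.staticRow().isEmpty();
    }
}
```

With the null check, an empty static row would be serialized as if it held data; checking `isEmpty()` skips it correctly.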


> sstabledump possible problem
> 
>
> Key: CASSANDRA-13600
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13600
> Project: Cassandra
>  Issue Type: Bug
> Environment: Official cassandra docker image (last) under Win10
>Reporter: a8775
>  Labels: patch
> Fix For: 3.10
>
> Attachments: CASSANDRA-13600.patch
>
>
> h2. Possible bug in sstabledump
> {noformat}
> cqlsh> show version
> [cqlsh 5.0.1 | Cassandra 3.10 | CQL spec 3.4.4 | Native protocol v4]
> {noformat}
> h2. Execute script in cqlsh in new keyspace
> {noformat}
> CREATE TABLE IF NOT EXISTS test_data (   
> // partitioning key
> PK TEXT, 
> // data
> Data TEXT,
> 
> PRIMARY KEY (PK)
> );
> insert into test_data(PK,Data) values('0','');
> insert into test_data(PK,Data) values('1','');
> insert into test_data(PK,Data) values('2','');
> delete from test_data where PK='1';
> insert into test_data(PK,Data) values('1','');
> {noformat}
> h2. Execute the following commands
> {noformat}
> nodetool flush
> nodetool compact
> sstabledump mc-2-big-Data.db
> sstabledump -d mc-2-big-Data.db
> {noformat}
> h3. default dump - missing data for partition key = "1"
> {noformat}
> [
>   {
> "partition" : {
>   "key" : [ "0" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 15,
> "liveness_info" : { "tstamp" : "2017-06-14T12:23:13.529389Z" },
> "cells" : [
>   { "name" : "data", "value" : "" }
> ]
>   }
> ]
>   },
>   {
> "partition" : {
>   "key" : [ "2" ],
>   "position" : 26
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 41,
> "liveness_info" : { "tstamp" : "2017-06-14T12:23:13.544132Z" },
> "cells" : [
>   { "name" : "data", "value" : "" }
> ]
>   }
> ]
>   },
>   {
> "partition" : {
>   "key" : [ "1" ],
>   "position" : 53,
>   "deletion_info" : { "marked_deleted" : "2017-06-14T12:23:13.545988Z", 
> "local_delete_time" : "2017-06-14T12:23:13Z" }
> }
>   }
> ]
> {noformat}
> h3. dump with -d option - correct data for partition key = "1"
> {noformat}
> [0]@0 Row[info=[ts=1497442993529389] ]:  | [data= ts=1497442993529389]
> [2]@26 Row[info=[ts=1497442993544132] ]:  | [data= ts=1497442993544132]
> [1]@53 deletedAt=1497442993545988, localDeletion=1497442993
> [1]@53 Row[info=[ts=1497442993550159] ]:  | [data= ts=1497442993550159]
> {noformat}






[jira] [Updated] (CASSANDRA-13600) sstabledump possible problem

2017-08-27 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-13600:
-
Attachment: CASSANDRA-13600.patch

> sstabledump possible problem
> 
>
> Key: CASSANDRA-13600
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13600
> Project: Cassandra
>  Issue Type: Bug
> Environment: Official cassandra docker image (last) under Win10
>Reporter: a8775
>  Labels: patch
> Fix For: 3.10
>
> Attachments: CASSANDRA-13600.patch
>
>
> h2. Possible bug in sstabledump
> {noformat}
> cqlsh> show version
> [cqlsh 5.0.1 | Cassandra 3.10 | CQL spec 3.4.4 | Native protocol v4]
> {noformat}
> h2. Execute script in cqlsh in new keyspace
> {noformat}
> CREATE TABLE IF NOT EXISTS test_data (   
> // partitioning key
> PK TEXT, 
> // data
> Data TEXT,
> 
> PRIMARY KEY (PK)
> );
> insert into test_data(PK,Data) values('0','');
> insert into test_data(PK,Data) values('1','');
> insert into test_data(PK,Data) values('2','');
> delete from test_data where PK='1';
> insert into test_data(PK,Data) values('1','');
> {noformat}
> h2. Execute the following commands
> {noformat}
> nodetool flush
> nodetool compact
> sstabledump mc-2-big-Data.db
> sstabledump -d mc-2-big-Data.db
> {noformat}
> h3. default dump - missing data for partition key = "1"
> {noformat}
> [
>   {
> "partition" : {
>   "key" : [ "0" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 15,
> "liveness_info" : { "tstamp" : "2017-06-14T12:23:13.529389Z" },
> "cells" : [
>   { "name" : "data", "value" : "" }
> ]
>   }
> ]
>   },
>   {
> "partition" : {
>   "key" : [ "2" ],
>   "position" : 26
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 41,
> "liveness_info" : { "tstamp" : "2017-06-14T12:23:13.544132Z" },
> "cells" : [
>   { "name" : "data", "value" : "" }
> ]
>   }
> ]
>   },
>   {
> "partition" : {
>   "key" : [ "1" ],
>   "position" : 53,
>   "deletion_info" : { "marked_deleted" : "2017-06-14T12:23:13.545988Z", 
> "local_delete_time" : "2017-06-14T12:23:13Z" }
> }
>   }
> ]
> {noformat}
> h3. dump with -d option - correct data for partition key = "1"
> {noformat}
> [0]@0 Row[info=[ts=1497442993529389] ]:  | [data= ts=1497442993529389]
> [2]@26 Row[info=[ts=1497442993544132] ]:  | [data= ts=1497442993544132]
> [1]@53 deletedAt=1497442993545988, localDeletion=1497442993
> [1]@53 Row[info=[ts=1497442993550159] ]:  | [data= ts=1497442993550159]
> {noformat}






[jira] [Commented] (CASSANDRA-13600) sstabledump possible problem

2017-08-25 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16141295#comment-16141295
 ] 

Varun Barala commented on CASSANDRA-13600:
--


This part has a problem: 
https://github.com/apache/cassandra/blob/cassandra-3.10/src/java/org/apache/cassandra/tools/JsonTransformer.java#L189
A partition has a single position but can carry multiple statuses (liveness info 
and deletion info). I'll provide a patch.
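To make the point concrete, here is a minimal, self-contained sketch of the serialization idea; the class and method names are illustrative only and not Cassandra's actual {{JsonTransformer}} API. A partition serializer must emit *both* the {{deletion_info}} block and the {{rows}} block when a partition-level tombstone coexists with a newer live row, instead of stopping after the first status:

{code:java}
import java.util.List;
import java.util.Optional;

// Hypothetical sketch (names are illustrative, not Cassandra's code): emit the
// partition tombstone AND any rows that shadow or outlive it.
public class PartitionJsonSketch {
    static String serialize(String key, long position,
                            Optional<String> markedDeleted, List<String> rowJson) {
        StringBuilder sb = new StringBuilder();
        sb.append("{ \"key\" : [ \"").append(key).append("\" ], ")
          .append("\"position\" : ").append(position);
        // Emit the partition-level deletion if present...
        markedDeleted.ifPresent(ts ->
            sb.append(", \"deletion_info\" : { \"marked_deleted\" : \"")
              .append(ts).append("\" }"));
        // ...and ALSO any live rows, rather than treating the two as exclusive.
        if (!rowJson.isEmpty())
            sb.append(", \"rows\" : [ ").append(String.join(", ", rowJson)).append(" ]");
        return sb.append(" }").toString();
    }

    public static void main(String[] args) {
        System.out.println(serialize("1", 53,
                Optional.of("2017-06-14T12:23:13.545988Z"),
                List.of("{ \"type\" : \"row\" }")));
    }
}
{code}

With this shape, the partition for key "1" would show its tombstone and the re-inserted row in the same JSON object, matching what the {{-d}} dump already reports.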

> sstabledump possible problem
> 
>
> Key: CASSANDRA-13600
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13600
> Project: Cassandra
>  Issue Type: Bug
> Environment: Official cassandra docker image (last) under Win10
>Reporter: a8775
> Fix For: 3.10
>
>
> h2. Possible bug in sstabledump
> {noformat}
> cqlsh> show version
> [cqlsh 5.0.1 | Cassandra 3.10 | CQL spec 3.4.4 | Native protocol v4]
> {noformat}
> h2. Execute script in cqlsh in new keyspace
> {noformat}
> CREATE TABLE IF NOT EXISTS test_data (   
> // partitioning key
> PK TEXT, 
> // data
> Data TEXT,
> 
> PRIMARY KEY (PK)
> );
> insert into test_data(PK,Data) values('0','');
> insert into test_data(PK,Data) values('1','');
> insert into test_data(PK,Data) values('2','');
> delete from test_data where PK='1';
> insert into test_data(PK,Data) values('1','');
> {noformat}
> h2. Execute the following commands
> {noformat}
> nodetool flush
> nodetool compact
> sstabledump mc-2-big-Data.db
> sstabledump -d mc-2-big-Data.db
> {noformat}
> h3. default dump - missing data for partition key = "1"
> {noformat}
> [
>   {
> "partition" : {
>   "key" : [ "0" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 15,
> "liveness_info" : { "tstamp" : "2017-06-14T12:23:13.529389Z" },
> "cells" : [
>   { "name" : "data", "value" : "" }
> ]
>   }
> ]
>   },
>   {
> "partition" : {
>   "key" : [ "2" ],
>   "position" : 26
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 41,
> "liveness_info" : { "tstamp" : "2017-06-14T12:23:13.544132Z" },
> "cells" : [
>   { "name" : "data", "value" : "" }
> ]
>   }
> ]
>   },
>   {
> "partition" : {
>   "key" : [ "1" ],
>   "position" : 53,
>   "deletion_info" : { "marked_deleted" : "2017-06-14T12:23:13.545988Z", 
> "local_delete_time" : "2017-06-14T12:23:13Z" }
> }
>   }
> ]
> {noformat}
> h3. dump with -d option - correct data for partition key = "1"
> {noformat}
> [0]@0 Row[info=[ts=1497442993529389] ]:  | [data= ts=1497442993529389]
> [2]@26 Row[info=[ts=1497442993544132] ]:  | [data= ts=1497442993544132]
> [1]@53 deletedAt=1497442993545988, localDeletion=1497442993
> [1]@53 Row[info=[ts=1497442993550159] ]:  | [data= ts=1497442993550159]
> {noformat}






[jira] [Updated] (CASSANDRA-9375) setting timeouts to 1ms prevents startup

2017-08-22 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-9375:

Attachment: (was: CASSANDRA-9375_after_review_2.patch)

> setting timeouts to 1ms prevents startup
> 
>
> Key: CASSANDRA-9375
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9375
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Assignee: Varun Barala
>Priority: Trivial
>  Labels: patch
> Fix For: 2.1.x
>
> Attachments: CASSANDRA-9375_after_review, 
> CASSANDRA-9375_after_review_2.patch, CASSANDRA-9375.patch
>
>
> Granted, this is a nonsensical setting, but the error message makes it tough 
> to discern what's wrong:
> {noformat}
> ERROR 17:13:28,726 Exception encountered during startup
> java.lang.ExceptionInInitializerError
>  at 
> org.apache.cassandra.net.MessagingService.instance(MessagingService.java:310)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:233)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:141)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:87)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:63)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:518)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:350)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:112)
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:213)
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:656)
> Caused by: java.lang.IllegalArgumentException
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:586)
>  at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.scheduleWithFixedDelay(DebuggableScheduledThreadPoolExecutor.java:64)
>  at org.apache.cassandra.utils.ExpiringMap.(ExpiringMap.java:103)
>  at 
> org.apache.cassandra.net.MessagingService.(MessagingService.java:360)
>  at org.apache.cassandra.net.MessagingService.(MessagingService.java:68)
>  at 
> org.apache.cassandra.net.MessagingService$MSHandle.(MessagingService.java:306)
>  ... 11 more
> java.lang.ExceptionInInitializerError
>  at 
> org.apache.cassandra.net.MessagingService.instance(MessagingService.java:310)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:233)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:141)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:87)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:63)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:518)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:350)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:112)
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:213)
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:656)
> Caused by: java.lang.IllegalArgumentException
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:586)
>  at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.scheduleWithFixedDelay(DebuggableScheduledThreadPoolExecutor.java:64)
>  at org.apache.cassandra.utils.ExpiringMap.(ExpiringMap.java:103)
>  at 
> org.apache.cassandra.net.MessagingService.(MessagingService.java:360)
>  at org.apache.cassandra.net.MessagingService.(MessagingService.java:68)
>  at 
> org.apache.cassandra.net.MessagingService$MSHandle.(MessagingService.java:306)
>  ... 11 more
> Exception encountered during startup: null
> {noformat}






[jira] [Updated] (CASSANDRA-9375) setting timeouts to 1ms prevents startup

2017-08-22 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-9375:

Attachment: CASSANDRA-9375_after_review_2.patch

> setting timeouts to 1ms prevents startup
> 
>
> Key: CASSANDRA-9375
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9375
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Assignee: Varun Barala
>Priority: Trivial
>  Labels: patch
> Fix For: 2.1.x
>
> Attachments: CASSANDRA-9375_after_review, 
> CASSANDRA-9375_after_review_2.patch, CASSANDRA-9375.patch
>
>






[jira] [Commented] (CASSANDRA-9375) setting timeouts to 1ms prevents startup

2017-08-22 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137643#comment-16137643
 ] 

Varun Barala commented on CASSANDRA-9375:
-

Thanks [~jjirsa] [~jasobrown] for the review. I updated the patch as per your suggestions:
* Added this check inside {{DatabaseDescriptor}}
* Added a JUnit test case

{{DatabaseDescriptor}} has been refactored in 3.11.0 
[https://github.com/apache/cassandra/commit/9797511c56df4e9c7db964a6b83e67642df96c2d#diff-a8a9935b164cd23da473fd45784fd1dd].
 I'll provide a separate patch for that. Thanks!!

I have a small doubt: do we also need to validate the {{DatabaseDescriptor}} setters, e.g. 
{{#setCasContentionTimeout(Long timeOutInMillis)}}?
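The validation idea can be sketched in plain Java; the constant value and method name below are assumptions for illustration, not the actual patch. The point is to fail fast at config time with a readable message instead of letting {{ScheduledThreadPoolExecutor}} throw a bare {{IllegalArgumentException}} during startup:

{code:java}
// Illustrative sketch: reject nonsensically small timeouts up front.
public class TimeoutValidationSketch {
    // Assumed lower bound for the sketch; the real patch may choose another.
    static final long LOWEST_ACCEPTED_TIMEOUT_MS = 10;

    static long validateTimeout(String name, long valueMs) {
        if (valueMs < LOWEST_ACCEPTED_TIMEOUT_MS)
            throw new IllegalArgumentException(
                name + " must be at least " + LOWEST_ACCEPTED_TIMEOUT_MS
                     + "ms, but was " + valueMs + "ms");
        return valueMs;
    }

    public static void main(String[] args) {
        // A sane value passes through unchanged.
        System.out.println(validateTimeout("cas_contention_timeout_in_ms", 1000));
        try {
            // A 1ms timeout is rejected with a message naming the offending option.
            validateTimeout("read_request_timeout_in_ms", 1);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
{code}

Calling the same check from the setters would answer the doubt above: both the YAML path and the programmatic path would go through one guard.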

> setting timeouts to 1ms prevents startup
> 
>
> Key: CASSANDRA-9375
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9375
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Assignee: Varun Barala
>Priority: Trivial
>  Labels: patch
> Fix For: 2.1.x
>
> Attachments: CASSANDRA-9375_after_review, 
> CASSANDRA-9375_after_review_2.patch, CASSANDRA-9375.patch
>
>





[jira] [Updated] (CASSANDRA-9375) setting timeouts to 1ms prevents startup

2017-08-22 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-9375:

Attachment: CASSANDRA-9375_after_review_2.patch

> setting timeouts to 1ms prevents startup
> 
>
> Key: CASSANDRA-9375
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9375
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Assignee: Varun Barala
>Priority: Trivial
>  Labels: patch
> Fix For: 2.1.x
>
> Attachments: CASSANDRA-9375_after_review, 
> CASSANDRA-9375_after_review_2.patch, CASSANDRA-9375.patch
>
>






[jira] [Commented] (CASSANDRA-13670) NullPointerException while closing CQLSSTableWriter

2017-08-06 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115800#comment-16115800
 ] 

Varun Barala commented on CASSANDRA-13670:
--

[~arpanps] Can you please help me reproduce this?


{code:java}
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

import org.apache.cassandra.dht.Murmur3Partitioner;
import org.apache.cassandra.io.sstable.CQLSSTableWriter;
import org.supercsv.io.CsvListReader;
import org.supercsv.prefs.CsvPreference;

/**
 * 
 * @author ooo
 *
 */
public class CqlWriterTest {
    private static final String createDDL = "CREATE TABLE testing.table2 (pk1 text,pk2 text,ck1 text,ck2 text,nk1 text,nk2 text,PRIMARY KEY (( pk1, pk2 ), ck1, ck2));";
    private static final String csvFilePath = "/home/ooo/cassandra3.0.14/apache-cassandra-3.0.14/var.csv";
    private static final String insertDDL = "insert into testing.table2 (pk1,pk2,ck1,ck2,nk1,nk2) VALUES (?,?,?,?,?,?);";
    private static final String inputDir = "/home/ooo/cassandra3.0.14/apache-cassandra-3.0.14/sstables/tmp";

    public static void main(String[] args) throws IOException {
        CQLSSTableWriter.Builder builder = CQLSSTableWriter.builder();
        builder.inDirectory(inputDir).forTable(createDDL).using(insertDDL).withPartitioner(new Murmur3Partitioner());
        CQLSSTableWriter writer = builder.build();

        try (BufferedReader reader = new BufferedReader(new FileReader(csvFilePath));
             CsvListReader csvReader = new CsvListReader(reader, CsvPreference.STANDARD_PREFERENCE)) {
            List<String> line;
            while ((line = csvReader.read()) != null) {
                List<ByteBuffer> bbl = new ArrayList<>();
                for (String l : line) {
                    bbl.add(ByteBuffer.wrap(l.getBytes()));
                }
                writer.rawAddRow(bbl);
                // If I use writer.addRow(); it works fine.
            }
        } finally {
            writer.close();
        }
    }
}

{code}

It's working fine in my case.
{{writer.addRow()}} accepts object values, not binary values.

The Javadoc says:
{code:java}
/**
 * Adds a new row to the writer.
 * <p>
 * Each provided value type should correspond to the types of the CQL column
 * the value is for. The correspondance between java type and CQL type is the
 * same one than the one documented at
 * www.datastax.com/drivers/java/2.0/apidocs/com/datastax/driver/core/DataType.Name.html#asJavaClass().
 * <p>
 * If you prefer providing the values directly as binary, use
 * {@link #rawAddRow} instead.
 *
 * @param values the row values (corresponding to the bind variables of the
 * insertion statement used when creating by this writer).
 * @return this writer.
 */
public CQLSSTableWriter addRow(List<Object> values)
{code}
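As a minimal illustration of the object-vs-binary distinction (plain JDK only, no Cassandra classes; the helper name is made up for this sketch): an {{addRow}}-style API takes Java objects and serializes them per the CQL column type, whereas a {{rawAddRow}}-style API takes pre-serialized {{ByteBuffer}}s. For TEXT columns the raw form is simply the UTF-8 bytes of each value:

{code:java}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class RawRowSketch {
    // Convert one CSV line of TEXT fields into the binary form a raw-row API
    // expects: each field becomes a ByteBuffer over its UTF-8 bytes.
    static List<ByteBuffer> toRawTextRow(List<String> fields) {
        List<ByteBuffer> raw = new ArrayList<>();
        for (String f : fields)
            raw.add(ByteBuffer.wrap(f.getBytes(StandardCharsets.UTF_8)));
        return raw;
    }

    public static void main(String[] args) {
        List<ByteBuffer> row = toRawTextRow(List.of("pk1", "some data"));
        System.out.println(row.size()); // 2
    }
}
{code}

Note this only works because every column in the example table is TEXT; non-text columns would need the serialization matching their CQL type, which is exactly what {{addRow}} does for you.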


> NullPointerException while closing CQLSSTableWriter
> ---
>
> Key: CASSANDRA-13670
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13670
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Linux
>Reporter: Arpan Khandelwal
> Fix For: 3.0.14
>
>
> Reading data from csv file and writing using CQLSSTableWriter. 
> {code:java}
>   CQLSSTableWriter.Builder builder = CQLSSTableWriter.builder();
> 
> builder.inDirectory(outputDir).forTable(createDDL).using(insertDML).withPartitioner(new
>  Murmur3Partitioner());
> CQLSSTableWriter writer = builder.build();
> {code}
> {code:java}
>  try (BufferedReader reader = new BufferedReader(new FileReader(csvFilePath));
> CsvListReader csvReader = new CsvListReader(reader, 
> CsvPreference.STANDARD_PREFERENCE);) {
> List<String> line;
> while ((line = csvReader.read()) != null) {
> List<ByteBuffer> bbl = new ArrayList<>();
> for (String l : line) {
> bbl.add(ByteBuffer.wrap(l.getBytes()));
> }
> writer.rawAddRow(bbl);
> // If I use writer.addRow(); it works fine.
> }
> } finally {
> writer.close();
> }
> {code}
> Getting below exception
> {code:java}
> java.lang.RuntimeException: java.lang.NullPointerException
> at 
> org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter.close(SSTableSimpleUnsortedWriter.java:136)
> at 
> org.apache.cassandra.io.sstable.CQLSSTableWriter.close(CQLSSTableWriter.java:280)
> at com.cfx.cassandra.SSTableCreator.execute(SSTableCreator.java:155)
> at com.cfx.cassandra.SSTableCreator.main(SSTableCreator.java:84)
> Caused by: 

[jira] [Commented] (CASSANDRA-9375) setting timeouts to 1ms prevents startup

2017-08-06 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115787#comment-16115787
 ] 

Varun Barala commented on CASSANDRA-9375:
-

[~jasobrown] Thanks for the review. 
I updated a few things:
* log at info level
* added comments in cassandra.yaml

Please have a look and let me know your feedback. Thank you!!

> setting timeouts to 1ms prevents startup
> 
>
> Key: CASSANDRA-9375
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9375
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Assignee: Varun Barala
>Priority: Trivial
>  Labels: patch
> Fix For: 2.1.x
>
> Attachments: CASSANDRA-9375_after_review, CASSANDRA-9375.patch
>
>






[jira] [Updated] (CASSANDRA-9375) setting timeouts to 1ms prevents startup

2017-08-06 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-9375:

Attachment: CASSANDRA-9375_after_review

> setting timeouts to 1ms prevents startup
> 
>
> Key: CASSANDRA-9375
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9375
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Assignee: Varun Barala
>Priority: Trivial
>  Labels: patch
> Fix For: 2.1.x
>
> Attachments: CASSANDRA-9375_after_review, CASSANDRA-9375.patch
>
>
>  at org.apache.cassandra.utils.ExpiringMap.(ExpiringMap.java:103)
>  at 
> org.apache.cassandra.net.MessagingService.(MessagingService.java:360)
>  at org.apache.cassandra.net.MessagingService.(MessagingService.java:68)
>  at 
> org.apache.cassandra.net.MessagingService$MSHandle.(MessagingService.java:306)
>  ... 11 more
> Exception encountered during startup: null
> {noformat}






[jira] [Commented] (CASSANDRA-13721) "ignore" option is ignored in sstableloader

2017-07-23 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16097646#comment-16097646
 ] 

Varun Barala commented on CASSANDRA-13721:
--

Thank you so much! This was my first patch to be accepted! :)

> "ignore" option is ignored in sstableloader
> ---
>
> Key: CASSANDRA-13721
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13721
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Sergey Yegournov
>Assignee: Varun Barala
>  Labels: patch
> Fix For: 3.11.1, 4.0
>
> Attachments: CASSANDRA-13721.patch
>
>
> If ignore option is set on the command line sstableloader still streams to 
> the nodes excluded.
> I believe the issue is in the 
> [https://github.com/apache/cassandra/blob/dfb90b1458ac6ee427f9e329b45c764a3a0a0c06/src/java/org/apache/cassandra/tools/LoaderOptions.java]
>  - the LoaderOptions constructor does not set the "ignores" field from the 
> the "builder.ignores"






[jira] [Updated] (CASSANDRA-9375) setting timeouts to 1ms prevents startup

2017-07-22 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-9375:

   Labels: patch  (was: )
Reproduced In: 2.1.0
   Status: Patch Available  (was: Open)

I added a small check for the lowest acceptable timeouts; in this patch the 
lowest acceptable value is 10ms.
I verified that this improvement is needed for all C* versions.

Please let me know if I missed any cases. Thanks!
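A minimal sketch of the idea behind the patch: reject configured timeouts below a floor at startup, so the failure is explicit instead of an opaque {{ExceptionInInitializerError}}. The class and method names below are illustrative only; the actual patch modifies the configuration-applying code, not this standalone class.

```java
// Illustrative sketch, not the Cassandra source: validate timeouts against a
// minimum before the messaging layer ever sees them.
public class TimeoutCheck {
    // Hypothetical floor matching the value described in the patch.
    static final long LOWEST_ACCEPTED_TIMEOUT_MS = 10;

    // Returns the value unchanged if acceptable, otherwise fails with a
    // message naming the offending setting instead of a bare NPE/IAE.
    static long checkTimeout(String name, long valueMs) {
        if (valueMs < LOWEST_ACCEPTED_TIMEOUT_MS)
            throw new IllegalArgumentException(
                name + " must be at least " + LOWEST_ACCEPTED_TIMEOUT_MS
                     + "ms, got " + valueMs);
        return valueMs;
    }

    public static void main(String[] args) {
        // A sane value passes through unchanged.
        System.out.println(checkTimeout("read_request_timeout_in_ms", 5000));
        try {
            // The 1ms case from the bug report now fails with a clear message.
            checkTimeout("read_request_timeout_in_ms", 1);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```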

> setting timeouts to 1ms prevents startup
> 
>
> Key: CASSANDRA-9375
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9375
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Priority: Trivial
>  Labels: patch
> Fix For: 2.1.x
>
> Attachments: CASSANDRA-9375.patch
>
>
> Granted, this is a nonsensical setting, but the error message makes it tough 
> to discern what's wrong:
> {noformat}
> ERROR 17:13:28,726 Exception encountered during startup
> java.lang.ExceptionInInitializerError
>  at 
> org.apache.cassandra.net.MessagingService.instance(MessagingService.java:310)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:233)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:141)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:87)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:63)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:518)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:350)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:112)
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:213)
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:656)
> Caused by: java.lang.IllegalArgumentException
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:586)
>  at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.scheduleWithFixedDelay(DebuggableScheduledThreadPoolExecutor.java:64)
>  at org.apache.cassandra.utils.ExpiringMap.(ExpiringMap.java:103)
>  at 
> org.apache.cassandra.net.MessagingService.(MessagingService.java:360)
>  at org.apache.cassandra.net.MessagingService.(MessagingService.java:68)
>  at 
> org.apache.cassandra.net.MessagingService$MSHandle.(MessagingService.java:306)
>  ... 11 more
> java.lang.ExceptionInInitializerError
>  at 
> org.apache.cassandra.net.MessagingService.instance(MessagingService.java:310)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:233)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:141)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:87)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:63)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:518)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:350)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:112)
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:213)
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:656)
> Caused by: java.lang.IllegalArgumentException
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:586)
>  at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.scheduleWithFixedDelay(DebuggableScheduledThreadPoolExecutor.java:64)
>  at org.apache.cassandra.utils.ExpiringMap.(ExpiringMap.java:103)
>  at 
> org.apache.cassandra.net.MessagingService.(MessagingService.java:360)
>  at org.apache.cassandra.net.MessagingService.(MessagingService.java:68)
>  at 
> org.apache.cassandra.net.MessagingService$MSHandle.(MessagingService.java:306)
>  ... 11 more
> Exception encountered during startup: null
> {noformat}






[jira] [Updated] (CASSANDRA-9375) setting timeouts to 1ms prevents startup

2017-07-22 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-9375:

Attachment: CASSANDRA-9375.patch

> setting timeouts to 1ms prevents startup
> 
>
> Key: CASSANDRA-9375
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9375
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Priority: Trivial
>  Labels: patch
> Fix For: 2.1.x
>
> Attachments: CASSANDRA-9375.patch
>
>
> Granted, this is a nonsensical setting, but the error message makes it tough 
> to discern what's wrong:
> {noformat}
> ERROR 17:13:28,726 Exception encountered during startup
> java.lang.ExceptionInInitializerError
>  at 
> org.apache.cassandra.net.MessagingService.instance(MessagingService.java:310)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:233)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:141)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:87)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:63)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:518)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:350)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:112)
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:213)
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:656)
> Caused by: java.lang.IllegalArgumentException
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:586)
>  at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.scheduleWithFixedDelay(DebuggableScheduledThreadPoolExecutor.java:64)
>  at org.apache.cassandra.utils.ExpiringMap.(ExpiringMap.java:103)
>  at 
> org.apache.cassandra.net.MessagingService.(MessagingService.java:360)
>  at org.apache.cassandra.net.MessagingService.(MessagingService.java:68)
>  at 
> org.apache.cassandra.net.MessagingService$MSHandle.(MessagingService.java:306)
>  ... 11 more
> java.lang.ExceptionInInitializerError
>  at 
> org.apache.cassandra.net.MessagingService.instance(MessagingService.java:310)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:233)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:141)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:87)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:63)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:518)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:350)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:112)
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:213)
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:656)
> Caused by: java.lang.IllegalArgumentException
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:586)
>  at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.scheduleWithFixedDelay(DebuggableScheduledThreadPoolExecutor.java:64)
>  at org.apache.cassandra.utils.ExpiringMap.(ExpiringMap.java:103)
>  at 
> org.apache.cassandra.net.MessagingService.(MessagingService.java:360)
>  at org.apache.cassandra.net.MessagingService.(MessagingService.java:68)
>  at 
> org.apache.cassandra.net.MessagingService$MSHandle.(MessagingService.java:306)
>  ... 11 more
> Exception encountered during startup: null
> {noformat}






[jira] [Commented] (CASSANDRA-13721) "ignore" option is ignored in sstableloader

2017-07-22 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16097497#comment-16097497
 ] 

Varun Barala commented on CASSANDRA-13721:
--

Cassandra tags {{>= 3.4}} have this bug.

> "ignore" option is ignored in sstableloader
> ---
>
> Key: CASSANDRA-13721
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13721
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Sergey Yegournov
>  Labels: patch
> Attachments: CASSANDRA-13721.patch
>
>
> If ignore option is set on the command line sstableloader still streams to 
> the nodes excluded.
> I believe the issue is in the 
> [https://github.com/apache/cassandra/blob/dfb90b1458ac6ee427f9e329b45c764a3a0a0c06/src/java/org/apache/cassandra/tools/LoaderOptions.java]
>  - the LoaderOptions constructor does not set the "ignores" field from the 
> the "builder.ignores"






[jira] [Updated] (CASSANDRA-13721) "ignore" option is ignored in sstableloader

2017-07-22 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-13721:
-
Labels: patch  (was: )
Status: Patch Available  (was: Open)

Fixed the constructor of {{LoaderOptions}}.
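The shape of the bug and fix can be sketched with a standalone builder. The names mirror {{LoaderOptions}} but this is an illustration, not the Cassandra source: a constructor that fails to copy one field from its builder silently drops the ignore list.

```java
import java.util.HashSet;
import java.util.Set;

// Standalone sketch of the LoaderOptions bug: the product class must copy
// every field from its builder, or the corresponding CLI option is ignored.
public class LoaderOptionsSketch {
    final Set<String> ignores;

    // Reported bug: the constructor left this field empty instead of copying
    // builder.ignores. The line below is the fix.
    LoaderOptionsSketch(Builder b) {
        this.ignores = new HashSet<>(b.ignores); // copy from the builder
    }

    static class Builder {
        final Set<String> ignores = new HashSet<>();
        Builder ignore(String host) { ignores.add(host); return this; }
        LoaderOptionsSketch build() { return new LoaderOptionsSketch(this); }
    }

    public static void main(String[] args) {
        LoaderOptionsSketch opts =
            new Builder().ignore("10.0.0.1").build();
        // With the copy in place, the excluded host is visible to the loader.
        System.out.println(opts.ignores.contains("10.0.0.1"));
    }
}
```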

> "ignore" option is ignored in sstableloader
> ---
>
> Key: CASSANDRA-13721
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13721
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Sergey Yegournov
>  Labels: patch
> Attachments: CASSANDRA-13721.patch
>
>
> If ignore option is set on the command line sstableloader still streams to 
> the nodes excluded.
> I believe the issue is in the 
> [https://github.com/apache/cassandra/blob/dfb90b1458ac6ee427f9e329b45c764a3a0a0c06/src/java/org/apache/cassandra/tools/LoaderOptions.java]
>  - the LoaderOptions constructor does not set the "ignores" field from the 
> the "builder.ignores"






[jira] [Updated] (CASSANDRA-13721) "ignore" option is ignored in sstableloader

2017-07-22 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-13721:
-
Attachment: CASSANDRA-13721.patch

> "ignore" option is ignored in sstableloader
> ---
>
> Key: CASSANDRA-13721
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13721
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Sergey Yegournov
> Attachments: CASSANDRA-13721.patch
>
>
> If ignore option is set on the command line sstableloader still streams to 
> the nodes excluded.
> I believe the issue is in the 
> [https://github.com/apache/cassandra/blob/dfb90b1458ac6ee427f9e329b45c764a3a0a0c06/src/java/org/apache/cassandra/tools/LoaderOptions.java]
>  - the LoaderOptions constructor does not set the "ignores" field from the 
> the "builder.ignores"






[jira] [Commented] (CASSANDRA-13721) "ignore" option is ignored in sstableloader

2017-07-22 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16097391#comment-16097391
 ] 

Varun Barala commented on CASSANDRA-13721:
--

Yes, the constructor does not set {{ignores}}. I'm providing a patch for this; 
we also need to check other versions.

> "ignore" option is ignored in sstableloader
> ---
>
> Key: CASSANDRA-13721
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13721
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Sergey Yegournov
>
> If ignore option is set on the command line sstableloader still streams to 
> the nodes excluded.
> I believe the issue is in the 
> [https://github.com/apache/cassandra/blob/dfb90b1458ac6ee427f9e329b45c764a3a0a0c06/src/java/org/apache/cassandra/tools/LoaderOptions.java]
>  - the LoaderOptions constructor does not set the "ignores" field from the 
> the "builder.ignores"






[jira] [Updated] (CASSANDRA-9333) Edge case - Empty of blank password for JMX authentication not handled properly in nodetool commands

2017-07-21 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-9333:

Attachment: 2.1.2.png

> Edge case - Empty of blank password for JMX authentication not handled 
> properly in nodetool commands
> 
>
> Key: CASSANDRA-9333
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9333
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Apache Cassandra 2.1.2
>Reporter: Sumod Pawgi
>Priority: Minor
>  Labels: security
> Fix For: 2.1.x
>
> Attachments: 2.1.2.png
>
>
> While setting up JMX authentication for Apache Cassandra, if we set the 
> password blank (in the file - jmxremote.password), nodetool commands do not 
> work
> example creds are cassandra cassandra. In this case, for a secured cluster, 
> we run the nodetool command as - nodetool -u cassandra -pw cassandra status
> But if the password is kept as blank then we cannot execute nodetool command. 
> However, I believe that if a third party software used JMX authentication via 
> API, then they can use blank password for the operations. So this behavior 
> needs to be clarified and be consistent for this edge case scenario.






[jira] [Commented] (CASSANDRA-9333) Edge case - Empty of blank password for JMX authentication not handled properly in nodetool commands

2017-07-21 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16097099#comment-16097099
 ] 

Varun Barala commented on CASSANDRA-9333:
-

In this scenario, you can run the nodetool command as:
"$ bin/nodetool -u cassandra status"
It will then prompt for a password; if your password is empty, just hit enter.

That said, nodetool should also accept "$ bin/nodetool -u cassandra -pw  status". I'll 
go through the code.

> Edge case - Empty of blank password for JMX authentication not handled 
> properly in nodetool commands
> 
>
> Key: CASSANDRA-9333
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9333
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Apache Cassandra 2.1.2
>Reporter: Sumod Pawgi
>Priority: Minor
>  Labels: security
> Fix For: 2.1.x
>
>
> While setting up JMX authentication for Apache Cassandra, if we set the 
> password blank (in the file - jmxremote.password), nodetool commands do not 
> work
> example creds are cassandra cassandra. In this case, for a secured cluster, 
> we run the nodetool command as - nodetool -u cassandra -pw cassandra status
> But if the password is kept as blank then we cannot execute nodetool command. 
> However, I believe that if a third party software used JMX authentication via 
> API, then they can use blank password for the operations. So this behavior 
> needs to be clarified and be consistent for this edge case scenario.






[jira] [Comment Edited] (CASSANDRA-9333) Edge case - Empty of blank password for JMX authentication not handled properly in nodetool commands

2017-07-21 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16097099#comment-16097099
 ] 

Varun Barala edited comment on CASSANDRA-9333 at 7/22/17 4:12 AM:
--

In this scenario, you can run the nodetool command as:
{{$ bin/nodetool -u cassandra status}}
It will then prompt for a password; if your password is empty, just hit enter.

That said, nodetool should also accept {{$ bin/nodetool -u cassandra -pw  status}}. I'll 
go through the code.


was (Author: varuna):
In this scenario, You can use nodetool command like:-
"$ bin/nodetool -u cassandra status"
then It'll ask for password If your password is empty then just hit enter.

Though nodetool should accept "$ bin/nodetool -u cassandra -pw  status". I'll 
go through the code.

> Edge case - Empty of blank password for JMX authentication not handled 
> properly in nodetool commands
> 
>
> Key: CASSANDRA-9333
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9333
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Apache Cassandra 2.1.2
>Reporter: Sumod Pawgi
>Priority: Minor
>  Labels: security
> Fix For: 2.1.x
>
>
> While setting up JMX authentication for Apache Cassandra, if we set the 
> password blank (in the file - jmxremote.password), nodetool commands do not 
> work
> example creds are cassandra cassandra. In this case, for a secured cluster, 
> we run the nodetool command as - nodetool -u cassandra -pw cassandra status
> But if the password is kept as blank then we cannot execute nodetool command. 
> However, I believe that if a third party software used JMX authentication via 
> API, then they can use blank password for the operations. So this behavior 
> needs to be clarified and be consistent for this edge case scenario.






[jira] [Comment Edited] (CASSANDRA-13694) sstabledump does not show full precision of timestamp columns

2017-07-21 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092937#comment-16092937
 ] 

Varun Barala edited comment on CASSANDRA-13694 at 7/22/17 2:59 AM:
---

To match cqlsh input, I added one more date format in 
`*TimestampSerializer.java*`.

Previously the default format was `-MM-dd HH:mmXX`, which has minute-level 
precision. In this patch I changed it to `-MM-dd HH:mm:ss.SSSXX`.

I appended it at the end of the *dateStringPatterns* array to keep the changes 
minimal.

Please let me know if I didn't consider any case. Thank you!


was (Author: varuna):
In order to match with cqlsh input. I added one more date format in 
`*TimestampSerializer.java*`.

Previously default format was `-MM-dd HH:mmXX` which has minute level 
precision. In this patch I changed it to `-MM-dd HH:mm:ss.SSXX`.

I appended at the end of *dateStringPatterns* array to make sure minimum 
changes.

Please do let me know If I didn;t consider any case. Thank you!!
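The precision difference being fixed can be demonstrated with plain {{java.text.SimpleDateFormat}}. The patterns below are assumed reconstructions: the archived text appears to have dropped the leading year field ({{yyyy}}) from the patterns under discussion.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Shows minute-level vs millisecond-level formatting of the same instant.
public class TimestampPrecision {
    // Format an epoch-millis instant in UTC with the given pattern.
    static String format(String pattern, long epochMs) {
        SimpleDateFormat f = new SimpleDateFormat(pattern);
        f.setTimeZone(TimeZone.getTimeZone("UTC"));
        return f.format(new Date(epochMs));
    }

    public static void main(String[] args) {
        long ts = 1500043959919L; // 2017-07-14 14:52:39.919 UTC, as in the report
        // Minute precision: the sub-minute part of the timestamp is lost.
        System.out.println(format("yyyy-MM-dd HH:mmXX", ts));
        // Full precision: seconds and milliseconds are preserved.
        System.out.println(format("yyyy-MM-dd HH:mm:ss.SSSXX", ts));
    }
}
```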

> sstabledump does not show full precision of timestamp columns
> -
>
> Key: CASSANDRA-13694
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13694
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Ubuntu 16.04 LTS
>Reporter: Tim Reeves
>  Labels: patch-available
> Fix For: 3.7
>
> Attachments: CASSANDRA-13694-after-review.patch, CASSANDRA-13694.patch
>
>
> Create a table:
> CREATE TABLE test_table (
> unit_no bigint,
> event_code text,
> active_time timestamp,
> ack_time timestamp,
> PRIMARY KEY ((unit_no, event_code), active_time)
> ) WITH CLUSTERING ORDER BY (active_time DESC)
> Insert a row:
> INSERT INTO test_table (unit_no, event_code, active_time, ack_time)
>   VALUES (1234, 'TEST EVENT', toTimestamp(now()), 
> toTimestamp(now()));
> Verify that it is in the database with a full timestamp:
> cqlsh:pentaho> select * from test_table;
>  unit_no | event_code | active_time | ack_time
> -++-+-
> 1234 | TEST EVENT | 2017-07-14 14:52:39.919000+ | 2017-07-14 
> 14:52:39.919000+
> (1 rows)
> Write file:
> nodetool flush
> nodetool compact pentaho
> Use sstabledump:
> treeves@ubuntu:~$ sstabledump 
> /var/lib/cassandra/data/pentaho/test_table-99ba228068a311e7ac30953b79ac2c3e/mb-2-big-Data.db
> [
>   {
> "partition" : {
>   "key" : [ "1234", "TEST EVENT" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 38,
> "clustering" : [ "2017-07-14 15:52+0100" ],
> "liveness_info" : { "tstamp" : "2017-07-14T14:52:39.888701Z" },
> "cells" : [
>   { "name" : "ack_time", "value" : "2017-07-14 15:52+0100" }
> ]
>   }
> ]
>   }
> ]
> treeves@ubuntu:~$ 
> The timestamp in the cluster key, and the regular column, are both truncated 
> to the minute.






[jira] [Updated] (CASSANDRA-13694) sstabledump does not show full precision of timestamp columns

2017-07-20 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-13694:
-
Attachment: CASSANDRA-13694-after-review.patch

> sstabledump does not show full precision of timestamp columns
> -
>
> Key: CASSANDRA-13694
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13694
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Ubuntu 16.04 LTS
>Reporter: Tim Reeves
>  Labels: patch-available
> Fix For: 3.7
>
> Attachments: CASSANDRA-13694-after-review.patch, CASSANDRA-13694.patch
>
>
> Create a table:
> CREATE TABLE test_table (
> unit_no bigint,
> event_code text,
> active_time timestamp,
> ack_time timestamp,
> PRIMARY KEY ((unit_no, event_code), active_time)
> ) WITH CLUSTERING ORDER BY (active_time DESC)
> Insert a row:
> INSERT INTO test_table (unit_no, event_code, active_time, ack_time)
>   VALUES (1234, 'TEST EVENT', toTimestamp(now()), 
> toTimestamp(now()));
> Verify that it is in the database with a full timestamp:
> cqlsh:pentaho> select * from test_table;
>  unit_no | event_code | active_time | ack_time
> -++-+-
> 1234 | TEST EVENT | 2017-07-14 14:52:39.919000+ | 2017-07-14 
> 14:52:39.919000+
> (1 rows)
> Write file:
> nodetool flush
> nodetool compact pentaho
> Use sstabledump:
> treeves@ubuntu:~$ sstabledump 
> /var/lib/cassandra/data/pentaho/test_table-99ba228068a311e7ac30953b79ac2c3e/mb-2-big-Data.db
> [
>   {
> "partition" : {
>   "key" : [ "1234", "TEST EVENT" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 38,
> "clustering" : [ "2017-07-14 15:52+0100" ],
> "liveness_info" : { "tstamp" : "2017-07-14T14:52:39.888701Z" },
> "cells" : [
>   { "name" : "ack_time", "value" : "2017-07-14 15:52+0100" }
> ]
>   }
> ]
>   }
> ]
> treeves@ubuntu:~$ 
> The timestamp in the cluster key, and the regular column, are both truncated 
> to the minute.






[jira] [Commented] (CASSANDRA-13694) sstabledump does not show full precision of timestamp columns

2017-07-20 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094372#comment-16094372
 ] 

Varun Barala commented on CASSANDRA-13694:
--

[~jjirsa] Thanks for the review. I totally agree with you. In the second patch, I 
exposed a new function, {{AbstractType#getStringHandlesTimestamp}}, which will 
only be used by {{JsonTransformer}}.

Please have a look. Thanks!

> sstabledump does not show full precision of timestamp columns
> -
>
> Key: CASSANDRA-13694
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13694
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Ubuntu 16.04 LTS
>Reporter: Tim Reeves
>  Labels: patch-available
> Fix For: 3.7
>
> Attachments: CASSANDRA-13694.patch
>
>
> Create a table:
> CREATE TABLE test_table (
> unit_no bigint,
> event_code text,
> active_time timestamp,
> ack_time timestamp,
> PRIMARY KEY ((unit_no, event_code), active_time)
> ) WITH CLUSTERING ORDER BY (active_time DESC)
> Insert a row:
> INSERT INTO test_table (unit_no, event_code, active_time, ack_time)
>   VALUES (1234, 'TEST EVENT', toTimestamp(now()), 
> toTimestamp(now()));
> Verify that it is in the database with a full timestamp:
> cqlsh:pentaho> select * from test_table;
>  unit_no | event_code | active_time | ack_time
> -++-+-
> 1234 | TEST EVENT | 2017-07-14 14:52:39.919000+ | 2017-07-14 
> 14:52:39.919000+
> (1 rows)
> Write file:
> nodetool flush
> nodetool compact pentaho
> Use sstabledump:
> treeves@ubuntu:~$ sstabledump 
> /var/lib/cassandra/data/pentaho/test_table-99ba228068a311e7ac30953b79ac2c3e/mb-2-big-Data.db
> [
>   {
> "partition" : {
>   "key" : [ "1234", "TEST EVENT" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 38,
> "clustering" : [ "2017-07-14 15:52+0100" ],
> "liveness_info" : { "tstamp" : "2017-07-14T14:52:39.888701Z" },
> "cells" : [
>   { "name" : "ack_time", "value" : "2017-07-14 15:52+0100" }
> ]
>   }
> ]
>   }
> ]
> treeves@ubuntu:~$ 
> The timestamp in the cluster key, and the regular column, are both truncated 
> to the minute.






[jira] [Updated] (CASSANDRA-13694) sstabledump does not show full precision of timestamp columns

2017-07-19 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-13694:
-
Status: Ready to Commit  (was: Patch Available)

> sstabledump does not show full precision of timestamp columns
> -
>
> Key: CASSANDRA-13694
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13694
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Ubuntu 16.04 LTS
>Reporter: Tim Reeves
>  Labels: patch
> Fix For: 3.7
>
> Attachments: CASSANDRA-13694.patch
>
>
> Create a table:
> CREATE TABLE test_table (
> unit_no bigint,
> event_code text,
> active_time timestamp,
> ack_time timestamp,
> PRIMARY KEY ((unit_no, event_code), active_time)
> ) WITH CLUSTERING ORDER BY (active_time DESC)
> Insert a row:
> INSERT INTO test_table (unit_no, event_code, active_time, ack_time)
>   VALUES (1234, 'TEST EVENT', toTimestamp(now()), 
> toTimestamp(now()));
> Verify that it is in the database with a full timestamp:
> cqlsh:pentaho> select * from test_table;
>  unit_no | event_code | active_time                     | ack_time
> ---------+------------+---------------------------------+---------------------------------
>     1234 | TEST EVENT | 2017-07-14 14:52:39.919000+0000 | 2017-07-14 14:52:39.919000+0000
> (1 rows)
> Write file:
> nodetool flush
> nodetool compact pentaho
> Use sstabledump:
> treeves@ubuntu:~$ sstabledump 
> /var/lib/cassandra/data/pentaho/test_table-99ba228068a311e7ac30953b79ac2c3e/mb-2-big-Data.db
> [
>   {
> "partition" : {
>   "key" : [ "1234", "TEST EVENT" ],
>   "position" : 0
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 38,
> "clustering" : [ "2017-07-14 15:52+0100" ],
> "liveness_info" : { "tstamp" : "2017-07-14T14:52:39.888701Z" },
> "cells" : [
>   { "name" : "ack_time", "value" : "2017-07-14 15:52+0100" }
> ]
>   }
> ]
>   }
> ]
> treeves@ubuntu:~$ 
> The timestamp in the cluster key, and the regular column, are both truncated 
> to the minute.
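The truncation described above can be reproduced outside Cassandra: formatting a full-precision timestamp with a minute-level pattern silently drops the seconds and microseconds. A minimal Python sketch of the effect (the format strings are illustrative analogues of the serializer's Java patterns, not Cassandra's actual code):

```python
from datetime import datetime, timezone

# The timestamp cqlsh showed: 2017-07-14 14:52:39.919000+0000
ts = datetime(2017, 7, 14, 14, 52, 39, 919000, tzinfo=timezone.utc)

# Minute-precision pattern: the kind of format sstabledump effectively used.
truncated = ts.strftime("%Y-%m-%d %H:%M%z")
# Full-precision pattern: what cqlsh displays.
full = ts.strftime("%Y-%m-%d %H:%M:%S.%f%z")

print(truncated)  # 2017-07-14 14:52+0000
print(full)       # 2017-07-14 14:52:39.919000+0000
```

Both strings come from the same stored value; only the output pattern differs, which is why the fix is purely in the formatting layer.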



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13694) sstabledump does not show full precision of timestamp columns

2017-07-19 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-13694:
-
Labels: patch-available  (was: patch)
Status: Patch Available  (was: Open)




[jira] [Updated] (CASSANDRA-13694) sstabledump does not show full precision of timestamp columns

2017-07-19 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-13694:
-
Status: Open  (was: Ready to Commit)




[jira] [Commented] (CASSANDRA-13694) sstabledump does not show full precision of timestamp columns

2017-07-19 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092942#comment-16092942
 ] 

Varun Barala commented on CASSANDRA-13694:
--

After this patch, the output will look like this:

{code:java}
[
  {
    "partition" : {
      "key" : [ "1234", "TEST EVENT" ],
      "position" : 0
    },
    "rows" : [
      {
        "type" : "row",
        "position" : 38,
        "clustering" : [ "1970-01-18 16:16:13.000183+0730" ],
        "liveness_info" : { "tstamp" : "2017-07-18T09:19:55.623Z" },
        "cells" : [
          { "name" : "ack_time", "value" : "1970-01-18 16:16:13.03+0730" }
        ]
      }
    ]
  }
]
{code}
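The practical difference is that the post-patch output round-trips: parsing the dumped string recovers the exact stored instant, while the minute-level form cannot. A Python sketch of that property (format strings are analogues, not the serializer's actual Java patterns):

```python
from datetime import datetime

FULL = "%Y-%m-%d %H:%M:%S.%f%z"   # sub-second precision (post-patch style)
COARSE = "%Y-%m-%d %H:%M%z"       # minute precision (pre-patch style)

original = datetime.strptime("2017-07-18 09:19:55.623000+0000", FULL)

# Dumping with full precision is a lossless round trip...
assert datetime.strptime(original.strftime(FULL), FULL) == original

# ...while the minute-level format silently drops seconds and microseconds.
reparsed = datetime.strptime(original.strftime(COARSE), COARSE)
assert reparsed != original
assert (original - reparsed).total_seconds() == 55.623
```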




[jira] [Updated] (CASSANDRA-13694) sstabledump does not show full precision of timestamp columns

2017-07-19 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-13694:
-
Attachment: (was: CASSANDRA-13694)




[jira] [Updated] (CASSANDRA-13694) sstabledump does not show full precision of timestamp columns

2017-07-19 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-13694:
-
Attachment: CASSANDRA-13694.patch




[jira] [Updated] (CASSANDRA-13694) sstabledump does not show full precision of timestamp columns

2017-07-19 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-13694:
-
Attachment: CASSANDRA-13694




[jira] [Updated] (CASSANDRA-13694) sstabledump does not show full precision of timestamp columns

2017-07-19 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-13694:
-
   Labels: patch  (was: )
Reproduced In: 3.7
   Status: Patch Available  (was: Open)

To match the cqlsh input format, I added one more date format to `*TimestampSerializer.java*`.

Previously the default format was `yyyy-MM-dd HH:mmXX`, which has minute-level precision. In this patch I changed it to `yyyy-MM-dd HH:mm:ss.SSSXX`.

I appended the new pattern at the end of the *dateStringPatterns* array to keep the changes minimal.

Please let me know if I missed any case. Thank you!
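Appending at the end of the candidate list is the safe spot because such parsers try patterns in order and the first match wins: inputs that already parsed keep matching the same earlier pattern, and only strings that previously failed fall through to the new entry. A first-match sketch in Python (the pattern list is illustrative, not the actual `dateStringPatterns` contents):

```python
from datetime import datetime

# Order matters: earlier patterns win, so appending a new pattern at the
# end cannot change how previously-accepted inputs are parsed.
PATTERNS = [
    "%Y-%m-%d %H:%M%z",        # pre-existing, minute precision
    "%Y-%m-%d %H:%M:%S%z",     # pre-existing, second precision
    "%Y-%m-%d %H:%M:%S.%f%z",  # appended last: sub-second precision
]

def parse_timestamp(s: str) -> datetime:
    for fmt in PATTERNS:
        try:
            return datetime.strptime(s, fmt)
        except ValueError:
            continue
    raise ValueError(f"no pattern matches {s!r}")

# Old inputs still match the old patterns; the new one only catches
# strings that used to fail.
assert parse_timestamp("2017-07-14 14:52+0100").minute == 52
assert parse_timestamp("2017-07-14 14:52:39+0100").second == 39
assert parse_timestamp("2017-07-14 14:52:39.919000+0100").microsecond == 919000
```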




[jira] [Commented] (CASSANDRA-13694) sstabledump does not show full precision of timestamp columns

2017-07-18 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092561#comment-16092561
 ] 

Varun Barala commented on CASSANDRA-13694:
--

In DevCenter it shows like this:
{code:java}
1234,TEST EVENT,1970-01-18 16:16:13+0730,1970-01-18 16:16:13+0730
{code}

This depends on the `TimestampSerializer` `toString()` method.
Solution:
* We can modify the default `toString` format, but we need to verify that it won't break anything or cause unexpected behavior. Thanks!




[jira] [Commented] (CASSANDRA-13653) Create meaningful toString() methods

2017-07-09 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079472#comment-16079472
 ] 

Varun Barala commented on CASSANDRA-13653:
--

[~agacha] You don't need to request it; just assign the issue to yourself and provide a patch.
* Reviewers will review the patch.
* Committers will commit it.

> Create meaningful toString() methods
> 
>
> Key: CASSANDRA-13653
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13653
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeff Jirsa
>Priority: Trivial
>  Labels: lhf, low-hanging-fruit
>
> True low-hanging fruit, good for a first-time contributor:
> There are a lot of classes without meaningful {{toString()}} implementations. 
> Some of these would be very nice to have for investigating bug reports.
> Some good places to start: 
> - CQL3 statements (UpdateStatement, DeleteStatement, etc), QueryOptions, and 
> Restrictions
> Some packages not to worry about: 
> - Deep internals that don't already have them 
> (org.apache.cassandra.db.rows/partitions/etc)
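What "meaningful" means here is replacing the default identity string (class name plus hash) with a summary of the object's salient fields, so the statement is intelligible when it shows up in a log or bug report. A language-agnostic sketch in Python, using an illustrative statement class rather than Cassandra's actual types:

```python
class DeleteStatement:
    """Hypothetical stand-in for a CQL3 statement object."""

    def __init__(self, keyspace: str, table: str, where: str):
        self.keyspace = keyspace
        self.table = table
        self.where = where

    def __repr__(self):
        # A meaningful toString(): enough detail to reconstruct what the
        # statement does, instead of the default "<DeleteStatement at 0x...>".
        return (f"DeleteStatement(keyspace={self.keyspace!r}, "
                f"table={self.table!r}, where={self.where!r})")

stmt = DeleteStatement("pentaho", "test_table", "unit_no = 1234")
print(repr(stmt))
```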






[jira] [Updated] (CASSANDRA-13532) sstabledump reports incorrect usage for argument order

2017-06-25 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-13532:
-
Attachment: sstabledump#printUsage.patch

Please find attached the patch file.

> sstabledump reports incorrect usage for argument order
> --
>
> Key: CASSANDRA-13532
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13532
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ian Ilsley
>Priority: Minor
>  Labels: lhf
> Attachments: sstabledump#printUsage.patch
>
>
> sstabledump usage reports 
> {{usage: sstabledump  }}
> However the actual usage is 
> {{sstabledump   }}






[jira] [Updated] (CASSANDRA-13532) sstabledump reports incorrect usage for argument order

2017-06-25 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-13532:
-
 Reviewer: ZhaoYang
Reproduced In: 3.0.4
   Status: Patch Available  (was: Open)

I'm submitting a patch to fix the `printUsage` function of 
`SSTableExport.java`. 




[jira] [Commented] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-04 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15270240#comment-15270240
 ] 

Varun Barala commented on CASSANDRA-11679:
--

okay thanks!!

> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>Assignee: Benjamin Lerer
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1), and I observed some strange behavior:
> There are 498 distinct rows, but a query for all distinct keys returns 
> 503 instead of 498 (five keys appear twice).
> If I set the fetch size in the select statement to more than 498, it 
> returns exactly 498 rows.
> If I execute the same statement in DevCenter, it returns 498 rows (because 
> the default fetch size is 5000). In `cqlsh` it returns 503 rows (because 
> cqlsh uses fetch size 100).
> Some Additional and useful information :- 
> ---
> Cassandra-2.1.13  (C)* version
> Consistency level: ONE 
> local machine(ubuntu 14.04)
> Table Schema:-
> --
> {code:xml}
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
> value blob,
> PRIMARY KEY (( pk1,  pk2))
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> {code}
> query :-
> 
> {code:xml}
> SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
> {code}





[jira] [Comment Edited] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-03 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15268172#comment-15268172
 ] 

Varun Barala edited comment on CASSANDRA-11679 at 5/3/16 6:25 AM:
--

I also checked all the duplicate keys and found a pattern: after every fetch-size boundary there was a duplicate key.

Example:
--
For 498 total keys and fetch size = 100:
(1) duplicate key -> 101
(2) duplicate key -> 201
(3) duplicate key -> 301
(4) duplicate key -> 401
(5) duplicate key -> 501

So in total it returns 503 (498 + 5).

I hope this helps you investigate the problem.


was (Author: varuna):
I also checked all duplicate keys and I found a pattern that after every fetch 
size there were a duplicate key.

Example:-
--
For total 498 keys and fetch size = 100.
(1) duplicate key -> 101
(2) duplicate key -> 201
(3) duplicate key -> 301
(4) duplicate key -> 401
(5) duplicate key -> 501

so total it returns 503 {498+5}. 

I hope this will help you to investigate the problem.



[jira] [Commented] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-03 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15268172#comment-15268172
 ] 

Varun Barala commented on CASSANDRA-11679:
--

I also checked all the duplicate keys and found a pattern: after every fetch-size boundary there was a duplicate key.

Example:
--
For 498 total keys and fetch size = 100:
(1) duplicate key -> 101
(2) duplicate key -> 201
(3) duplicate key -> 301
(4) duplicate key -> 401
(5) duplicate key -> 501

So in total it returns 503 (498 + 5).

I hope this helps you investigate the problem.
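The arithmetic is consistent with a pager whose next-page token points at the last row of the previous page instead of one past it: 498 keys read with a fetch size of 100 cross five page boundaries, giving 503 rows with duplicates exactly at positions 101, 201, 301, 401, and 501. A Python simulation of that off-by-one (a deliberately buggy illustration of the observed pattern, not the driver's actual code):

```python
def paged_read(keys, fetch_size):
    """Simulate a pager whose next-page offset re-includes the last row
    of the previous page (the suspected off-by-one)."""
    out, offset = [], 0
    while offset < len(keys):
        page = keys[offset:offset + fetch_size]
        out.extend(page)
        offset += fetch_size - 1  # bug: should advance by fetch_size
    return out

keys = list(range(498))
rows = paged_read(keys, 100)

print(len(rows))  # 503 = 498 + 5 duplicates
# 1-based positions where a row repeats its predecessor:
dupes = [i + 1 for i in range(1, len(rows)) if rows[i] == rows[i - 1]]
print(dupes)      # [101, 201, 301, 401, 501]
```

Deduplicating the result recovers exactly the 498 distinct keys, matching the behavior seen when the fetch size exceeds the row count (a single page, no boundary, no duplicates).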



[jira] [Comment Edited] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266759#comment-15266759
 ] 

Varun Barala edited comment on CASSANDRA-11679 at 5/2/16 3:09 PM:
--

{quote}this junit test is failing in your local, right?{quote}
No, I didn't change anything, and this JUnit test case is not failing in my 
Cassandra-2.1.13.

 


> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>Assignee: Benjamin Lerer
>





[jira] [Updated] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-11679:
-
Description: 
I'm trying to fetch all distinct keys from a CF using cassandra-driver 
(2.1.7.1) and I observed some strange behavior:

There are 498 distinct rows in total, so if I run a query to get all distinct 
keys it returns 503 instead of 498 (five keys are returned twice).
But if I set the fetch size on the select statement to more than 498, it 
returns exactly 498 rows.

If I execute the same statement in DevCenter it returns 498 rows (because the 
default fetch size is 5000). In `cqlsh` it returns 503 rows (because cqlsh 
uses a fetch size of 100).

Some additional, useful information:
---
Cassandra version: 2.1.13
Consistency level: ONE
Local machine (Ubuntu 14.04)

Table Schema:-
--

{code:sql}
CREATE TABLE sample (
    pk1 text,
    pk2 text,
    row_id uuid,
    value blob,
    PRIMARY KEY ((pk1, pk2))
) WITH bloom_filter_fp_chance = 0.01
    AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99.0PERCENTILE';
{code}

Query:

{code:sql}
SELECT DISTINCT pk2, pk1 FROM sample LIMIT 2147483647;
{code}

  was:
I'm trying to fetch all distinct keys from a CF using cassandra-driver 
(2.1.7.1) and I observed some strange behavior :-

The total distinct rows are 498 so If I perform a query get All distinctKeys It 
return 503 instead of 498(five keys twice).
But If I define the fetch size in select statement more than 498 then it 
returns exact 498 rows. 

And If I execute same statement on Dev-center it returns 498 rows.

Some Additional and useful information :- 
---
Cassandra-2.1.13  (C)* version
Consistency level: ONE 
local machine(ubuntu 14.04)

Table Schema:-
--

{code:xml}
CREATE TABLE sample (
 pk1 text,
 pk2 text,
row_id uuid,
value blob,
PRIMARY KEY (( pk1,  pk2))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';


{code}

query :-

{code:xml}
SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
{code}



[jira] [Comment Edited] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1520#comment-1520
 ] 

Varun Barala edited comment on CASSANDRA-11679 at 5/2/16 2:28 PM:
--

I'll check this one. Meanwhile, here is a test case that reproduces the 
problem:

{code:java}
// Note: the test lives in the driver's own package, so Cluster, Session,
// Statement, SimpleStatement, and ResultSet need no imports.
package com.datastax.driver.core;

import static org.junit.Assert.assertEquals;

import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class LererCheckTest {
    private static Cluster cluster;
    private static Session session;

    @BeforeClass
    public static void init() {
        cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        session = cluster.connect();
    }

    @AfterClass
    public static void close() {
        cluster.close();
    }

    @Before
    public void initData() {
        session.execute("drop keyspace if exists junit;");
        session.execute("create keyspace junit WITH REPLICATION = "
                + "{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 };");

        session.execute("CREATE TABLE junit.magic ("
                + "  pk1 text,"
                + "  pk2 text,"
                + "  row_id int,"
                + "  value int,"
                + "  PRIMARY KEY((pk1, pk2))"
                + ");");

        String insertQuery = "insert into junit.magic (pk1, pk2, row_id, value) "
                + "VALUES ('pk1', '%s', null, 0);";

        // 498 distinct partition keys
        for (int i = 0; i < 498; i++) {
            session.execute(String.format(insertQuery, "" + i));
        }
        System.out.println("498 records inserted successfully!");
    }

    @Test
    public void checkDistKeysForMagic498() {
        String query = "select distinct pk1, pk2 from junit.magic;";
        Statement statementWithModifiedFetchSize = new SimpleStatement(query);
        Statement statementWithDefaultFetchSize = new SimpleStatement(query);
        statementWithModifiedFetchSize.setFetchSize(100);

        // result set for the default fetch size (5000): all keys fit in one page
        ResultSet resultSetForDefaultFetchSize =
                session.execute(statementWithDefaultFetchSize);
        int totalDistinctKeysForDefaultFetchSize =
                resultSetForDefaultFetchSize.all().size();
        assertEquals(498, totalDistinctKeysForDefaultFetchSize);

        // result set with fetch size 100 (<= 498): duplicates appear
        ResultSet resultSetForModifiedFetchSize =
                session.execute(statementWithModifiedFetchSize);
        int totalDistinctKeysForModifiedFetchSize =
                resultSetForModifiedFetchSize.all().size();
        assertEquals(503, totalDistinctKeysForModifiedFetchSize);
    }
}

{code}


[jira] [Commented] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-05-02 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1520#comment-1520
 ] 

Varun Barala commented on CASSANDRA-11679:
--

I'll check this one But I'm gonna share one test case with you which will 
reproduce the problem :-

{code:java}
package com.datastax.driver.core;

import static org.junit.Assert.*;

import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class LererCheckTest {
    private static Cluster cluster;
    private static Session session;

    @BeforeClass
    public static void init() {
        cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        session = cluster.connect();
    }

    @AfterClass
    public static void close() {
        cluster.close();
    }

    @Before
    public void initData() {
        session.execute("drop keyspace if exists junit;");
        session.execute("create keyspace junit WITH REPLICATION = "
                + "{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 };");

        session.execute("CREATE TABLE junit.magic ("
                + "  tenant_id text,"
                + "  str_key text,"
                + "  row_id int,"
                + "  value int,"
                + "  PRIMARY KEY((tenant_id, str_key))"
                + ");");

        String insertQuery = "insert into junit.magic ("
                + "tenant_id, str_key, row_id, value"
                + ") VALUES ('test_tenant', '%s', null, 0);";

        for (int i = 0; i < 498; i++) {
            session.execute(String.format(insertQuery, "" + i));
        }
        System.out.println("498 records inserted successfully!!!");
    }

    @Test
    public void checkDistKeysForMagic498() {
        // Query the table created in initData() above.
        String query = "select distinct tenant_id, str_key from junit.magic;";
        Statement statementWithModifiedFetchSize = new SimpleStatement(query);
        Statement statementWithDefaultFetchSize = new SimpleStatement(query);
        statementWithModifiedFetchSize.setFetchSize(100);

        // result set for the default fetch size: all 498 distinct keys
        ResultSet resultSetForDefaultFetchSize =
                session.execute(statementWithDefaultFetchSize);
        int totalDistinctKeysForDefaultFetchSize =
                resultSetForDefaultFetchSize.all().size();
        assertEquals(498, totalDistinctKeysForDefaultFetchSize);

        // result set with fetch size 100 (<= 498): demonstrates the bug,
        // five keys come back twice
        ResultSet resultSetForModifiedFetchSize =
                session.execute(statementWithModifiedFetchSize);
        int totalDistinctKeysForModifiedFetchSize =
                resultSetForModifiedFetchSize.all().size();
        assertEquals(503, totalDistinctKeysForModifiedFetchSize);
    }
}

{code}
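For intuition, duplicates like these can arise when a paged query resumes each page inclusive of the last partition key already returned. The toy model below is an assumption-laden sketch, not Cassandra's actual paging code (the reported bug duplicated five fixed keys regardless of fetch size, which this model does not capture); it only illustrates how an inclusive paging resume inflates a {{SELECT DISTINCT}} count, and happens to yield 503 for 498 keys at fetch size 100:

```java
public class DistinctPagingSimulation {

    // Toy model of page-based fetching where the server resumes each page
    // *inclusive* of the last partition key already returned (an assumed,
    // hypothetical bug, not Cassandra's real paging-state handling).
    static int simulate(int totalKeys, int fetchSize) {
        int returned = 0;
        int pos = 0; // index of the next key the client asks for
        while (pos < totalKeys) {
            int end = Math.min(pos + fetchSize, totalKeys);
            returned += end - pos;   // keys sent in this page
            if (end == totalKeys) {
                break;               // last page, nothing to resume
            }
            pos = end - 1;           // buggy resume: last key is re-sent
        }
        return returned;
    }

    public static void main(String[] args) {
        // 498 keys, fetch size 100 -> 5 page resumes -> 5 duplicate keys
        System.out.println(simulate(498, 100)); // 503
        // fetch size larger than the key count -> one page, no duplicates
        System.out.println(simulate(498, 500)); // 498
    }
}
```

With a fetch size of 500 (> 498) everything fits in a single page, so no resume happens and the count is exact, matching the reporter's observation that fetch sizes above 498 return 498 rows.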

> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>Assignee: Benjamin Lerer
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior :-
> The total distinct rows are 498 so If I perform a query get All distinctKeys 
> It return 503 instead of 498(five keys twice).
> But If I define the fetch size in select statement more than 498 then it 
> returns exact 498 rows. 
> And If I execute same statement on Dev-center it returns 498 rows.
> Some Additional and useful information :- 
> ---
> Cassandra-2.1.13  (C)* version
> Consistency level: ONE 
> local machine(ubuntu 14.04)
> Table Schema:-
> --
> {code:xml}
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
> value blob,
> PRIMARY KEY (( pk1,  pk2))
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = 

[jira] [Commented] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-04-29 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264372#comment-15264372
 ] 

Varun Barala commented on CASSANDRA-11679:
--

Yes, I tried 100 as the fetch size and it returned 503 distinct keys. In fact, 
for every fetch size <= 498 it returns 503 distinct keys. 

I didn't check the impact on the total number of rows returned, but if a 
`select *` query fetches the results corresponding to all distinct keys, then 
yes, it will have an impact.

> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>Assignee: Benjamin Lerer
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior :-
> The total distinct rows are 498 so If I perform a query get All distinctKeys 
> It return 503 instead of 498(five keys twice).
> But If I define the fetch size in select statement more than 498 then it 
> returns exact 498 rows. 
> And If I execute same statement on Dev-center it returns 498 rows.
> Some Additional and useful information :- 
> ---
> Cassandra-2.1.13  (C)* version
> Consistency level: ONE 
> local machine(ubuntu 14.04)
> Table Schema:-
> --
> {code:xml}
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
> value blob,
> PRIMARY KEY (( pk1,  pk2))
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> {code}
> query :-
> 
> {code:xml}
> SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-04-28 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-11679:
-
Description: 
I'm trying to fetch all distinct keys from a CF using cassandra-driver 
(2.1.7.1) and I observed some strange behavior :-

The total distinct rows are 498 so If I perform a query get All distinctKeys It 
return 503 instead of 498(five keys twice).
But If I define the fetch size in select statement more than 498 then it 
returns exact 498 rows. 

And If I execute same statement on Dev-center it returns 498 rows.

Some Additional and useful information :- 
---
Cassandra-2.1.13  (C)* version
Consistency level: ONE 
local machine(ubuntu 14.04)

Table Schema:-
--

{code:xml}
CREATE TABLE sample (
 pk1 text,
 pk2 text,
row_id uuid,
value blob,
PRIMARY KEY (( pk1,  pk2))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';


{code}

query :-

{code:xml}
SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
{code}

  was:
I'm trying to fetch all distinct keys from a CF using cassandra-driver 
(2.1.7.1) and I observed some strange behavior :-

The total distinct rows are 498 so If I perform a query get All distinctKeys It 
return 503 instead of 498(five keys twice).
But If I define the fetch size in select statement more than 498 then it 
returns exact 498 rows. 

And If I execute same statement on Dev-center it returns 498 rows.


Table Schema:-
--

{code:xml}
CREATE TABLE sample (
 pk1 text,
 pk2 text,
row_id uuid,
value blob,
PRIMARY KEY (( pk1,  pk2))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';


{code}

query :-

{code:xml}
SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
{code}


> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior :-
> The total distinct rows are 498 so If I perform a query get All distinctKeys 
> It return 503 instead of 498(five keys twice).
> But If I define the fetch size in select statement more than 498 then it 
> returns exact 498 rows. 
> And If I execute same statement on Dev-center it returns 498 rows.
> Some Additional and useful information :- 
> ---
> Cassandra-2.1.13  (C)* version
> Consistency level: ONE 
> local machine(ubuntu 14.04)
> Table Schema:-
> --
> {code:xml}
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
> value blob,
> PRIMARY KEY (( pk1,  pk2))
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> {code}
> query :-
> 
> {code:xml}
> SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
> {code}




[jira] [Updated] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-04-28 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-11679:
-
Description: 
I'm trying to fetch all distinct keys from a CF using cassandra-driver 
(2.1.7.1) and I observed some strange behavior :-

The total distinct rows are 498 so If I perform a query get All distinctKeys It 
return 503 instead of 498(five keys twice).
But If I define the fetch size in select statement more than 498 then it 
returns exact 498 rows. 

And If I execute same statement on Dev-center it returns 498 rows.


Table Schema:-
--

{code:xml}
CREATE TABLE sample (
 pk1 text,
 pk2 text,
row_id uuid,
value blob,
PRIMARY KEY (( pk1,  pk2))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';


{code}

query :-

{code:xml}
SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
{code}

  was:
I'm trying to fetch all distinct keys from a CF using cassandra-driver 
(2.1.7.1) and I observed some strange behavior :-

The total distinct rows are 498 so If I perform the query get All distinctKeys 
It return 503 instead of 498(five keys twice).
But If I define the fetch size in select statement more than 498 then it 
returns exact 498 rows. 

And If I execute same statement on Dev-center it returns 498 rows.

public void abc

Table Schema:-
--
CREATE TABLE sample (
 pk1 text,
 pk2 text,
row_id uuid,
value blob,
PRIMARY KEY (( pk1,  pk2))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';

query :-

SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;



> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior :-
> The total distinct rows are 498 so If I perform a query get All distinctKeys 
> It return 503 instead of 498(five keys twice).
> But If I define the fetch size in select statement more than 498 then it 
> returns exact 498 rows. 
> And If I execute same statement on Dev-center it returns 498 rows.
> Table Schema:-
> --
> {code:xml}
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
> value blob,
> PRIMARY KEY (( pk1,  pk2))
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> {code}
> query :-
> 
> {code:xml}
> SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
> {code}





[jira] [Updated] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-04-28 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-11679:
-
Description: 
I'm trying to fetch all distinct keys from a CF using cassandra-driver 
(2.1.7.1) and I observed some strange behavior :-

The total distinct rows are 498 so If I perform the query get All distinctKeys 
It return 503 instead of 498(five keys twice).
But If I define the fetch size in select statement more than 498 then it 
returns exact 498 rows. 

And If I execute same statement on Dev-center it returns 498 rows.

public void abc

Table Schema:-
--
CREATE TABLE sample (
 pk1 text,
 pk2 text,
row_id uuid,
value blob,
PRIMARY KEY (( pk1,  pk2))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';

query :-

SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;


  was:
I'm trying to fetch all distinct keys from a CF using cassandra-driver 
(2.1.7.1) and I observed some strange behavior :-

The total distinct rows are 498 so If I perform the query get All distinctKeys 
It return 503 instead of 498(five keys twice).
But If I define the fetch size in select statement more than 498 then it 
returns exact 498 rows. 

And If I execute same statement on Dev-center it returns 498 rows.

`Table Schema:-
--
CREATE TABLE sample (
 pk1 text,
 pk2 text,
row_id uuid,
value blob,
PRIMARY KEY (( pk1,  pk2))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
`
`
query :-

SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
`


> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior :-
> The total distinct rows are 498 so If I perform the query get All 
> distinctKeys It return 503 instead of 498(five keys twice).
> But If I define the fetch size in select statement more than 498 then it 
> returns exact 498 rows. 
> And If I execute same statement on Dev-center it returns 498 rows.
> public void abc
> Table Schema:-
> --
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
> value blob,
> PRIMARY KEY (( pk1,  pk2))
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> query :-
> 
> SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;





[jira] [Updated] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-04-28 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-11679:
-
Description: 
I'm trying to fetch all distinct keys from a CF using cassandra-driver 
(2.1.7.1) and I observed some strange behavior :-

The total distinct rows are 498 so If I perform the query get All distinctKeys 
It return 503 instead of 498(five keys twice).
But If I define the fetch size in select statement more than 498 then it 
returns exact 498 rows. 

And If I execute same statement on Dev-center it returns 498 rows.

'''
Table Schema:-
--
CREATE TABLE sample (
 pk1 text,
 pk2 text,
row_id uuid,
value blob,
PRIMARY KEY (( pk1,  pk2))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
'''
'''
query :-

SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
'''

  was:
I'm trying to fetch all distinct keys from a CF using cassandra-driver 
(2.1.7.1) and I observed some strange behavior :-

The total distinct rows are 498 so If I perform the query get All distinctKeys 
It return 503 instead of 498(five keys twice).
But If I define the fetch size in select statement more than 498 then it 
returns exact 498 rows. 

And If I execute same statement on Dev-center it returns 498 rows.

Table Schema:-
--
CREATE TABLE sample (
 pk1 text,
 pk2 text,
row_id uuid,
value blob,
PRIMARY KEY (( pk1,  pk2))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';

query :-

SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;


> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior :-
> The total distinct rows are 498 so If I perform the query get All 
> distinctKeys It return 503 instead of 498(five keys twice).
> But If I define the fetch size in select statement more than 498 then it 
> returns exact 498 rows. 
> And If I execute same statement on Dev-center it returns 498 rows.
> '''
> Table Schema:-
> --
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
> value blob,
> PRIMARY KEY (( pk1,  pk2))
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> '''
> '''
> query :-
> 
> SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
> '''





[jira] [Updated] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-04-28 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-11679:
-
Description: 
I'm trying to fetch all distinct keys from a CF using cassandra-driver 
(2.1.7.1) and I observed some strange behavior :-

The total distinct rows are 498 so If I perform the query get All distinctKeys 
It return 503 instead of 498(five keys twice).
But If I define the fetch size in select statement more than 498 then it 
returns exact 498 rows. 

And If I execute same statement on Dev-center it returns 498 rows.

`Table Schema:-
--
CREATE TABLE sample (
 pk1 text,
 pk2 text,
row_id uuid,
value blob,
PRIMARY KEY (( pk1,  pk2))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
`
`
query :-

SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
`

  was:
I'm trying to fetch all distinct keys from a CF using cassandra-driver 
(2.1.7.1) and I observed some strange behavior :-

The total distinct rows are 498 so If I perform the query get All distinctKeys 
It return 503 instead of 498(five keys twice).
But If I define the fetch size in select statement more than 498 then it 
returns exact 498 rows. 

And If I execute same statement on Dev-center it returns 498 rows.

'''
Table Schema:-
--
CREATE TABLE sample (
 pk1 text,
 pk2 text,
row_id uuid,
value blob,
PRIMARY KEY (( pk1,  pk2))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
'''
'''
query :-

SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
'''


> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior :-
> The total distinct rows are 498 so If I perform the query get All 
> distinctKeys It return 503 instead of 498(five keys twice).
> But If I define the fetch size in select statement more than 498 then it 
> returns exact 498 rows. 
> And If I execute same statement on Dev-center it returns 498 rows.
> `Table Schema:-
> --
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
> value blob,
> PRIMARY KEY (( pk1,  pk2))
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> `
> `
> query :-
> 
> SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;
> `





[jira] [Updated] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-04-28 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-11679:
-
Description: 
I'm trying to fetch all distinct keys from a CF using cassandra-driver 
(2.1.7.1) and I observed some strange behavior :-

The total distinct rows are 498 so If I perform the query get All distinctKeys 
It return 503 instead of 498(five keys twice).
But If I define the fetch size in select statement more than 498 then it 
returns exact 498 rows. 

And If I execute same statement on Dev-center it returns 498 rows.

Table Schema:-
--
CREATE TABLE sample (
 pk1 text,
 pk2 text,
row_id uuid,
value blob,
PRIMARY KEY (( pk1,  pk2))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';

query :-

SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;

  was:
I'm trying to fetch all distinct keys from a CF using cassandra-driver 
(2.1.7.1) and I observed some strange behavior :-

The total distinct rows are 498 so If I perform the query get All distinctKeys 
It return 503 instead of 498(five keys twice).
But If I define the fetch size in select statement more than 498 then it 
returns exact 498 rows. 

And If I execute same statement on Dev-center it returns 498 rows.

Table Schema:-
--
CREATE TABLE product.kg_ky_wk_taisyo_kikan (
tenant_id text,
str_key text,
row_id uuid,
value blob,
PRIMARY KEY ((tenant_id, str_key))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';

query :-

SELECT DISTINCT str_key,tenant_id FROM product.kg_ky_wk_taisyo_kikan LIMIT 
2147483647;


> Cassandra Driver returns different number of results depending on fetchsize
> ---
>
> Key: CASSANDRA-11679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Varun Barala
>
> I'm trying to fetch all distinct keys from a CF using cassandra-driver 
> (2.1.7.1) and I observed some strange behavior :-
> The total distinct rows are 498 so If I perform the query get All 
> distinctKeys It return 503 instead of 498(five keys twice).
> But If I define the fetch size in select statement more than 498 then it 
> returns exact 498 rows. 
> And If I execute same statement on Dev-center it returns 498 rows.
> Table Schema:-
> --
> CREATE TABLE sample (
>  pk1 text,
>  pk2 text,
> row_id uuid,
> value blob,
> PRIMARY KEY (( pk1,  pk2))
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> query :-
> 
> SELECT DISTINCT  pk2, pk1 FROM sample LIMIT 2147483647;





[jira] [Created] (CASSANDRA-11679) Cassandra Driver returns different number of results depending on fetchsize

2016-04-28 Thread Varun Barala (JIRA)
Varun Barala created CASSANDRA-11679:


 Summary: Cassandra Driver returns different number of results 
depending on fetchsize
 Key: CASSANDRA-11679
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11679
 Project: Cassandra
  Issue Type: Bug
  Components: CQL
Reporter: Varun Barala


I'm trying to fetch all distinct keys from a CF using cassandra-driver 
(2.1.7.1) and I observed some strange behavior :-

The total distinct rows are 498 so If I perform the query get All distinctKeys 
It return 503 instead of 498(five keys twice).
But If I define the fetch size in select statement more than 498 then it 
returns exact 498 rows. 

And If I execute same statement on Dev-center it returns 498 rows.

Table Schema:-
--
CREATE TABLE product.kg_ky_wk_taisyo_kikan (
tenant_id text,
str_key text,
row_id uuid,
value blob,
PRIMARY KEY ((tenant_id, str_key))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';

query :-

SELECT DISTINCT str_key,tenant_id FROM product.kg_ky_wk_taisyo_kikan LIMIT 
2147483647;





[jira] [Updated] (CASSANDRA-11441) Filtering based on partition key feature in bulkLoader utility

2016-03-26 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-11441:
-
Issue Type: New Feature  (was: Improvement)

> Filtering based on partition key feature in bulkLoader utility 
> ---
>
> Key: CASSANDRA-11441
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11441
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tools
>Reporter: Varun Barala
>
> This feature will let users transfer only the required part of an SSTable 
> instead of the entire SSTable. 
> Usage:
> Suppose a CF has a composite partition key, say [user(text), id(uuid)], and 
> the user only wants to transfer the data for a given 'user A' rather than 
> the entire SSTable. 
> We can add one more parameter to the BulkLoader program for filtering on 
> the partition key. 
> The command would look like: 
> bin/sstableLoader -h  --filtering  





[jira] [Comment Edited] (CASSANDRA-11370) Display sstable count per level according to repair status on nodetool tablestats

2016-03-26 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15212978#comment-15212978
 ] 

Varun Barala edited comment on CASSANDRA-11370 at 3/26/16 12:03 PM:


Robert Stupp, it was my first time, so I was not aware of this. I also 
tried to undo it, but it didn't work.

Sorry for any inconvenience. 

Ah, now I understand.


was (Author: varuna):
Robert Stupp, it was my first time, so I was not aware of this. I also 
tried to undo it, but it didn't work.

Sorry for any inconvenience. 

> Display sstable count per level according to repair status on nodetool 
> tablestats 
> --
>
> Key: CASSANDRA-11370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11370
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: lhf
>
> After CASSANDRA-8004 we still display sstables in each level on nodetool 
> tablestats as if we had a single compaction strategy, while we have one 
> strategy for repaired and another for unrepaired data. 
> We should split the display into repaired and unrepaired sets, so this:
> SSTables in each level: [2, 20/10, 15, 0, 0, 0, 0, 0, 0]
> Would become:
> SSTables in each level (repaired): [1, 10, 0, 0, 0, 0, 0, 0, 0]
> SSTables in each level (unrepaired): [1, 10, 15, 0, 0, 0, 0, 0, 0]
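The change is purely a display split; the per-level counts of the two strategy instances still add up to the old combined view. A minimal sketch of that relationship in plain Python (hypothetical helper name, not actual nodetool code):

```python
def combine_levels(repaired, unrepaired):
    """Sum per-level SSTable counts from the repaired and unrepaired
    compaction strategy instances into the single combined view."""
    return [r + u for r, u in zip(repaired, unrepaired)]

repaired = [1, 10, 0, 0, 0, 0, 0, 0, 0]
unrepaired = [1, 10, 15, 0, 0, 0, 0, 0, 0]
print(combine_levels(repaired, unrepaired))
# [2, 20, 15, 0, 0, 0, 0, 0, 0]
```

The combined level-1 count of 20 is what the current output renders as "20/10" (count over the level's limit); the split view makes clear which strategy instance the SSTables belong to.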





[jira] [Commented] (CASSANDRA-11370) Display sstable count per level according to repair status on nodetool tablestats

2016-03-26 Thread Varun Barala (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15212978#comment-15212978
 ] 

Varun Barala commented on CASSANDRA-11370:
--

Robert Stupp, it was my first time, so I was not aware of this. I also 
tried to undo it, but it didn't work.

Sorry for any inconvenience. 

> Display sstable count per level according to repair status on nodetool 
> tablestats 
> --
>
> Key: CASSANDRA-11370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11370
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: lhf
>
> After CASSANDRA-8004 we still display sstables in each level on nodetool 
> tablestats as if we had a single compaction strategy, while we have one 
> strategy for repaired and another for unrepaired data. 
> We should split the display into repaired and unrepaired sets, so this:
> SSTables in each level: [2, 20/10, 15, 0, 0, 0, 0, 0, 0]
> Would become:
> SSTables in each level (repaired): [1, 10, 0, 0, 0, 0, 0, 0, 0]
> SSTables in each level (unrepaired): [1, 10, 15, 0, 0, 0, 0, 0, 0]





[jira] [Updated] (CASSANDRA-11370) Display sstable count per level according to repair status on nodetool tablestats

2016-03-26 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-11370:
-
Status: Open  (was: Ready to Commit)

> Display sstable count per level according to repair status on nodetool 
> tablestats 
> --
>
> Key: CASSANDRA-11370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11370
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: lhf
>
> After CASSANDRA-8004 we still display sstables in each level on nodetool 
> tablestats as if we had a single compaction strategy, while we have one 
> strategy for repaired and another for unrepaired data. 
> We should split the display into repaired and unrepaired sets, so this:
> SSTables in each level: [2, 20/10, 15, 0, 0, 0, 0, 0, 0]
> Would become:
> SSTables in each level (repaired): [1, 10, 0, 0, 0, 0, 0, 0, 0]
> SSTables in each level (unrepaired): [1, 10, 15, 0, 0, 0, 0, 0, 0]





[jira] [Updated] (CASSANDRA-11441) Filtering based on partition key feature in bulkLoader utility

2016-03-25 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-11441:
-
Reviewer: Varun Barala

> Filtering based on partition key feature in bulkLoader utility 
> ---
>
> Key: CASSANDRA-11441
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11441
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Varun Barala
>
> This feature will let users transfer only the required part of an SSTable 
> instead of the entire SSTable. 
> Usage:
> Suppose a CF has a composite partition key, say [user(text), id(uuid)], and 
> the user only wants to transfer the data for a given 'user A' rather than 
> the entire SSTable. 
> We can add one more parameter to the BulkLoader program for filtering on 
> the partition key. 
> The command would look like: 
> bin/sstableLoader -h  --filtering  





[jira] [Updated] (CASSANDRA-11370) Display sstable count per level according to repair status on nodetool tablestats

2016-03-25 Thread Varun Barala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Barala updated CASSANDRA-11370:
-
Status: Ready to Commit  (was: Patch Available)

> Display sstable count per level according to repair status on nodetool 
> tablestats 
> --
>
> Key: CASSANDRA-11370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11370
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: lhf
>
> After CASSANDRA-8004 we still display sstables in each level on nodetool 
> tablestats as if we had a single compaction strategy, while we have one 
> strategy for repaired and another for unrepaired data. 
> We should split the display into repaired and unrepaired sets, so this:
> SSTables in each level: [2, 20/10, 15, 0, 0, 0, 0, 0, 0]
> Would become:
> SSTables in each level (repaired): [1, 10, 0, 0, 0, 0, 0, 0, 0]
> SSTables in each level (unrepaired): [1, 10, 15, 0, 0, 0, 0, 0, 0]





[jira] [Created] (CASSANDRA-11441) Filtering based on partition key feature in bulkLoader utility

2016-03-25 Thread Varun Barala (JIRA)
Varun Barala created CASSANDRA-11441:


 Summary: Filtering based on partition key feature in bulkLoader 
utility 
 Key: CASSANDRA-11441
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11441
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Varun Barala


This feature will let users transfer only the required part of an SSTable 
instead of the entire SSTable. 

Usage:
Suppose a CF has a composite partition key, say [user(text), id(uuid)], and 
the user only wants to transfer the data for a given 'user A' rather than 
the entire SSTable. 

We can add one more parameter to the BulkLoader program for filtering on 
the partition key. 

The command would look like: 
bin/sstableLoader -h  --filtering  
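Conceptually, the proposed filter is just a predicate applied to each partition's key before it is streamed. A minimal sketch in plain Python (hypothetical names and dict-based rows for illustration only; the real sstableloader operates on serialized SSTable components, not dicts):

```python
def filter_by_partition_key(partitions, wanted_user):
    """Keep only partitions whose composite partition key
    (user, id) matches the requested user component."""
    return [p for p in partitions if p["user"] == wanted_user]

partitions = [
    {"user": "A", "id": 1, "value": b"x"},
    {"user": "B", "id": 2, "value": b"y"},
    {"user": "A", "id": 3, "value": b"z"},
]
print(filter_by_partition_key(partitions, "A"))
# [{'user': 'A', 'id': 1, 'value': b'x'}, {'user': 'A', 'id': 3, 'value': b'z'}]
```

Only the matching partitions would then be handed to the streaming step, so the receiving nodes never see data for other users.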




