[jira] [Commented] (CASSANDRA-12367) Add an API to request the size of a CQL partition

2016-09-13 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488516#comment-15488516
 ] 

Russell Bradberry commented on CASSANDRA-12367:
---

{quote}
Also by SIZE ON, will it return the size of data the query is returning or size 
on disk?
{quote}

It would probably make the most sense as the size of the data returned from the query. 
Size on disk could mean many things, e.g. because of compression.
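
For illustration, here is a rough sketch, using the DataStax Java driver 3.x with assumed keyspace/table/column names ({{ks.tbl}}, partition key {{pk}}), of how a client can approximate the serialized size of the data a query returns today, in the absence of such an API:

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ColumnDefinitions;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

import java.nio.ByteBuffer;

// Rough client-side approximation of "size of data returned from the query".
// The keyspace "ks", table "tbl", and partition key column "pk" are assumed names.
public class ReturnedSizeEstimate
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            ResultSet rs = session.execute("SELECT * FROM ks.tbl WHERE pk = 'some-key'");
            long bytes = 0;
            for (Row row : rs)
            {
                for (ColumnDefinitions.Definition col : row.getColumnDefinitions())
                {
                    // getBytesUnsafe returns the raw serialized value without copying
                    ByteBuffer raw = row.getBytesUnsafe(col.getName());
                    if (raw != null)
                        bytes += raw.remaining();
                }
            }
            System.out.println("approximate returned payload: " + bytes + " bytes");
        }
    }
}
{code}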

> Add an API to request the size of a CQL partition
> -
>
> Key: CASSANDRA-12367
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12367
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12367-trunk-v2.txt, 12367-trunk.txt
>
>
> It would be useful to have an API that we could use to get the total 
> serialized size of a CQL partition, scoped by keyspace and table, on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12367) Add an API to request the size of a CQL partition

2016-09-13 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15487263#comment-15487263
 ] 

Russell Bradberry commented on CASSANDRA-12367:
---

{quote}
I am not sure how it will work like tracing with SIZE ON? When you issue a 
query after SIZE ON, will it give the size of the query or CQL partition? 
{quote}
In this case it would be the size of the query. If you want the size of a given 
partition, you would run a query specifying only the partition key.

{quote}
Also we will need the size before every read or write. This will cause calling 
SIZE ON and then OFF after every operation.
{quote}

Why?  I was suggesting this for the CQL-specific representation; the internal 
representation could still be a JMX call.  If the client needs it for every 
read/write, then it would just always be on, just as if you wanted to have the 
trace information for every read/write.
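
As a point of comparison, here is a hedged sketch of how per-request tracing is already surfaced to clients through the DataStax Java driver 3.x (the keyspace, table, and key are assumed names); a per-request size flag could plausibly be exposed to clients in the same way:

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.QueryTrace;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

// Sketch of the existing per-request tracing model that a "SIZE"/"SIZES" flag could mirror.
public class TracingAnalogy
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            Statement stmt = new SimpleStatement("SELECT * FROM ks.tbl WHERE pk = ?", "some-key")
                    .enableTracing(); // opt in per request, like TRACING ON in cqlsh
            ResultSet rs = session.execute(stmt);
            QueryTrace trace = rs.getExecutionInfo().getQueryTrace();
            System.out.println("traced request " + trace.getTraceId());
        }
    }
}
{code}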

> Add an API to request the size of a CQL partition
> -
>
> Key: CASSANDRA-12367
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12367
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12367-trunk-v2.txt, 12367-trunk.txt
>
>
> It would be useful to have an API that we could use to get the total 
> serialized size of a CQL partition, scoped by keyspace and table, on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12629) All Nodes Replication Strategy

2016-09-12 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485139#comment-15485139
 ] 

Russell Bradberry commented on CASSANDRA-12629:
---

Would it make sense to name this "EverywhereStrategy" to keep in line with 
other discussions on the subject, such as CASSANDRA-826? I believe that 
there is an EverywhereStrategy in the Enterprise version as well.
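
For concreteness, here is a minimal sketch of what such a strategy could look like, written against the approximate 3.x {{AbstractReplicationStrategy}} interface; this is illustrative only, not the attached patch, and the exact method signatures are assumptions:

{code}
import java.net.InetAddress;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.cassandra.dht.Token;
import org.apache.cassandra.exceptions.ConfigurationException;
import org.apache.cassandra.locator.AbstractReplicationStrategy;
import org.apache.cassandra.locator.IEndpointSnitch;
import org.apache.cassandra.locator.TokenMetadata;

// Illustrative sketch: every node in the ring is a natural replica for every token,
// so a keyspace such as system_auth is picked up by new nodes as soon as they join.
public class EverywhereStrategy extends AbstractReplicationStrategy
{
    private final TokenMetadata metadata;

    public EverywhereStrategy(String keyspaceName, TokenMetadata tokenMetadata,
                              IEndpointSnitch snitch, Map<String, String> configOptions)
    {
        super(keyspaceName, tokenMetadata, snitch, configOptions);
        this.metadata = tokenMetadata;
    }

    @Override
    public List<InetAddress> calculateNaturalEndpoints(Token searchToken, TokenMetadata tokenMetadata)
    {
        // getAllEndpoints() is assumed to return every endpoint currently in the ring
        return new ArrayList<>(tokenMetadata.getAllEndpoints());
    }

    @Override
    public int getReplicationFactor()
    {
        return metadata.getAllEndpoints().size();
    }

    @Override
    public void validateOptions() throws ConfigurationException
    {
        // an everywhere strategy takes no options
    }
}
{code}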

> All Nodes Replication Strategy
> --
>
> Key: CASSANDRA-12629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12629
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alwyn Davis
>Priority: Minor
> Fix For: 3.7
>
> Attachments: 12629-3.7.patch
>
>
> When adding a new DC, keyspaces must be manually updated to replicate to the 
> new DC.  This is problematic for system_auth, as it cannot achieve LOCAL_ONE 
> consistency (for a non-cassandra user), until its replication options have 
> been updated on an existing node.
> Ideally, system_auth could be set to an "All Nodes strategy" that will 
> replicate it to all nodes, as they join the cluster.  It also removes the 
> need to update the replication factor for system_auth when adding nodes to 
> the cluster to keep with the recommendation of RF=number of nodes (at least 
> for small clusters).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12367) Add an API to request the size of a CQL partition

2016-09-12 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484582#comment-15484582
 ] 

Russell Bradberry edited comment on CASSANDRA-12367 at 9/12/16 4:40 PM:


I agree with [~thobbs] that it doesn't really belong in CQL directly.  The 
writeTime and ttl meta information in CQL is at the column level and makes 
sense there.  What about exposing it in the same way that TRACING is exposed, 
where setting something like "SIZES ON" would modify the output and could be 
implemented in the clients in a similar fashion?

This way, the size of the query can be returned and the user doesn't have to 
modify the query to understand how it is stored.


was (Author: devdazed):
I agree with [~thobbs] that it doesn't really belong in CQL directly.  The 
writeTime and ttl meta information in CQL is at the column level and makes 
sense.  What about exposing it in the same way that TRACING is exposed?  where 
setting something like "SIZES ON" will modify the output and can be implemented 
in the clients in a similar fashion

> Add an API to request the size of a CQL partition
> -
>
> Key: CASSANDRA-12367
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12367
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12367-trunk-v2.txt, 12367-trunk.txt
>
>
> It would be useful to have an API that we could use to get the total 
> serialized size of a CQL partition, scoped by keyspace and table, on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12367) Add an API to request the size of a CQL partition

2016-09-12 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484582#comment-15484582
 ] 

Russell Bradberry commented on CASSANDRA-12367:
---

I agree with [~thobbs] that it doesn't really belong in CQL directly.  The 
writeTime and ttl meta information in CQL is at the column level and makes 
sense there.  What about exposing it in the same way that TRACING is exposed, 
where setting something like "SIZES ON" would modify the output and could be 
implemented in the clients in a similar fashion?

> Add an API to request the size of a CQL partition
> -
>
> Key: CASSANDRA-12367
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12367
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12367-trunk-v2.txt, 12367-trunk.txt
>
>
> It would be useful to have an API that we could use to get the total 
> serialized size of a CQL partition, scoped by keyspace and table, on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11529) Checking if an unlogged batch is local is inefficient

2016-04-07 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231202#comment-15231202
 ] 

Russell Bradberry commented on CASSANDRA-11529:
---

This is a critical issue for us, as our cluster is in a mixed-version state 
where we have coordinator-only nodes running an older version to compensate for 
this issue.  The impact on a 50-node cluster (8 cores, 256 vnodes per node) with 
a few thousand batch inserts per second is that the average load climbs above 120.

> Checking if an unlogged batch is local is inefficient
> -
>
> Key: CASSANDRA-11529
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11529
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Paulo Motta
>
> Based on CASSANDRA-11363 report I noticed that on CASSANDRA-9303 we 
> introduced the following check to avoid printing a {{WARN}} in case an 
> unlogged batch statement is local:
> {noformat}
>  for (IMutation im : mutations)
>  {
>      keySet.add(im.key());
>      for (ColumnFamily cf : im.getColumnFamilies())
>          ksCfPairs.add(String.format("%s.%s", cf.metadata().ksName, cf.metadata().cfName));
> +
> +    if (localMutationsOnly)
> +        localMutationsOnly &= isMutationLocal(localTokensByKs, im);
>  }
>  
> +// CASSANDRA-9303: If we only have local mutations we do not warn
> +if (localMutationsOnly)
> +    return;
> +
>  NoSpamLogger.log(logger, NoSpamLogger.Level.WARN, 1, TimeUnit.MINUTES, unloggedBatchWarning,
>                   keySet.size(), keySet.size() == 1 ? "" : "s",
>                   ksCfPairs.size() == 1 ? "" : "s", ksCfPairs);
> {noformat}
> The {{isMutationLocal}} check uses 
> {{StorageService.instance.getLocalRanges(mutation.getKeyspaceName())}}, which 
> underneath uses {{AbstractReplicationStrategy.getAddressRanges}} to calculate 
> local ranges.
> Recalculating this at every unlogged batch can be pretty inefficient, so we 
> should at the very least cache it every time the ring changes.
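
As a rough illustration of the caching the description suggests, here is a sketch with hypothetical names ({{LocalRangeCache}} is not a real class, and {{getRingVersion()}}/{{getLocalRanges()}} are assumed to behave as in the 2.1/3.x code base):

{code}
import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.cassandra.dht.Range;
import org.apache.cassandra.dht.Token;
import org.apache.cassandra.service.StorageService;

// Hypothetical helper: cache local ranges per keyspace and invalidate when the ring changes.
public final class LocalRangeCache
{
    private static final Map<String, Collection<Range<Token>>> CACHE = new ConcurrentHashMap<>();
    private static volatile long cachedRingVersion = -1;

    public static Collection<Range<Token>> localRanges(String keyspaceName)
    {
        long ringVersion = StorageService.instance.getTokenMetadata().getRingVersion();
        if (ringVersion != cachedRingVersion)
        {
            // The ring changed (a node joined, left, or moved): drop the stale ranges.
            CACHE.clear();
            cachedRingVersion = ringVersion;
        }
        return CACHE.computeIfAbsent(keyspaceName, ks -> StorageService.instance.getLocalRanges(ks));
    }

    private LocalRangeCache() {}
}
{code}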



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load

2016-04-07 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231130#comment-15231130
 ] 

Russell Bradberry commented on CASSANDRA-11363:
---

[~pauloricardomg] 11529 makes sense because CASSANDRA-9303 was backported to 
2.1.12 in DSE 4.8.4, which is why we see it in that version and not only in 2.1.13.

> Blocked NTR When Connecting Causing Excessive Load
> --
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Priority: Critical
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load

2016-04-07 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231130#comment-15231130
 ] 

Russell Bradberry edited comment on CASSANDRA-11363 at 4/7/16 9:42 PM:
---

[~pauloricardomg] CASSANDRA-11529 makes sense because CASSANDRA-9303 was 
backported to 2.1.12 in DSE 4.8.4, which is why we see it in that version and 
not only in 2.1.13.


was (Author: devdazed):
[~pauloricardomg] 11529 makes sense because CASSANDRA-9303 was backported to 
2.1.12 in DSE 4.8.4. Hence why we see it in that version vs only in 2.1.13.

> Blocked NTR When Connecting Causing Excessive Load
> --
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Priority: Critical
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load

2016-04-07 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231115#comment-15231115
 ] 

Russell Bradberry commented on CASSANDRA-11363:
---

Is there any chance that the number of vnodes in a cluster affects how bad this issue is?

> Blocked NTR When Connecting Causing Excessive Load
> --
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Priority: Critical
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load

2016-04-07 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231108#comment-15231108
 ] 

Russell Bradberry commented on CASSANDRA-11363:
---

[~pauloricardomg] Yes, we are using unlogged batches that cross partitions.
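
For reference, here is a hedged sketch (DataStax Java driver 3.x; the keyspace, table, and key values are assumed) of the kind of cross-partition unlogged batch involved, i.e. one whose statements hit different partition keys:

{code}
import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

// Sketch of an unlogged batch spanning two partitions; "ks.tbl" and the key values are assumed.
public class CrossPartitionBatch
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            BatchStatement batch = new BatchStatement(BatchStatement.Type.UNLOGGED);
            // Two different partition keys, so the coordinator must resolve replicas for each one.
            batch.add(new SimpleStatement("INSERT INTO ks.tbl (pk, v) VALUES (?, ?)", "key-1", 1));
            batch.add(new SimpleStatement("INSERT INTO ks.tbl (pk, v) VALUES (?, ?)", "key-2", 2));
            session.execute(batch);
        }
    }
}
{code}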

> Blocked NTR When Connecting Causing Excessive Load
> --
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Priority: Critical
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load

2016-04-07 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231066#comment-15231066
 ] 

Russell Bradberry commented on CASSANDRA-11363:
---

Well, it may be that this is the same issue, just very much exacerbated by the 
batches.

> Blocked NTR When Connecting Causing Excessive Load
> --
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Priority: Critical
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load

2016-04-07 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231045#comment-15231045
 ] 

Russell Bradberry edited comment on CASSANDRA-11363 at 4/7/16 8:56 PM:
---

We may have two separate issues here.  In mine, the issue is 100% CPU 
utilization and ultra-high load when using batches.  According to the JFR, all 
of the hot threads are spinning on 
{code}
org.apache.cassandra.locator.NetworkTopologyStrategy.hasSufficientReplicas(String, Map, Multimap)
-> org.apache.cassandra.locator.NetworkTopologyStrategy.hasSufficientReplicas(Map, Multimap)
-> org.apache.cassandra.locator.NetworkTopologyStrategy.calculateNaturalEndpoints(Token, TokenMetadata)
{code}


was (Author: devdazed):
we may have two separate issues here, in mine, the issue is 100% CPU 
utilization and ultra high load when using batches.  According to the jfr all 
of hot threads are putting their resources in 
{code}
org.apache.cassandra.locator.NetworkTopologyStrategy.hasSufficientReplicas(String,
 Map, Multimap)
-> 
org.apache.cassandra.locator.NetworkTopologyStrategy.hasSufficientReplicas(Map, 
Multimap)
-> 
org.apache.cassandra.locator.NetworkTopologyStrategy.calculateNaturalEndpoints(Token,
 TokenMetadata)
{code}

> Blocked NTR When Connecting Causing Excessive Load
> --
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Priority: Critical
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> 

[jira] [Commented] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load

2016-04-07 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231045#comment-15231045
 ] 

Russell Bradberry commented on CASSANDRA-11363:
---

We may have two separate issues here.  In mine, the issue is 100% CPU 
utilization and ultra-high load when using batches.  According to the JFR, all 
of the hot threads are spending their time in 
{code}
org.apache.cassandra.locator.NetworkTopologyStrategy.hasSufficientReplicas(String, Map, Multimap)
-> org.apache.cassandra.locator.NetworkTopologyStrategy.hasSufficientReplicas(Map, Multimap)
-> org.apache.cassandra.locator.NetworkTopologyStrategy.calculateNaturalEndpoints(Token, TokenMetadata)
{code}

> Blocked NTR When Connecting Causing Excessive Load
> --
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Priority: Critical
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load

2016-04-07 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15230926#comment-15230926
 ] 

Russell Bradberry commented on CASSANDRA-11363:
---

Unfortunately we are on DSE, so I can't run the revert.

> Blocked NTR When Connecting Causing Excessive Load
> --
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Priority: Critical
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load

2016-03-20 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199757#comment-15199757
 ] 

Russell Bradberry commented on CASSANDRA-11363:
---

After some digging, we found the issue to be introduced in Cassandra 2.1.12.  We 
are running DSE, so the issue manifested in DSE 4.8.4.

> Blocked NTR When Connecting Causing Excessive Load
> --
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load

2016-03-19 Thread Russell Bradberry (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russell Bradberry updated CASSANDRA-11363:
--
Comment: was deleted

(was: After some digging, we found the issue to be introduce in Cassandra 
2.1.12.  We are running DSE so the issue manifested in DSE 4.8.4)

> Blocked NTR When Connecting Causing Excessive Load
> --
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load

2016-03-19 Thread Russell Bradberry (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russell Bradberry updated CASSANDRA-11363:
--
Reproduced In: 2.1.13, 2.1.12  (was: 2.1.13)

> Blocked NTR When Connecting Causing Excessive Load
> --
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load

2016-03-19 Thread Russell Bradberry (JIRA)
Russell Bradberry created CASSANDRA-11363:
-

 Summary: Blocked NTR When Connecting Causing Excessive Load
 Key: CASSANDRA-11363
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
 Project: Cassandra
  Issue Type: Bug
  Components: Coordination
Reporter: Russell Bradberry
 Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack

When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
machine load increases to very high levels (> 120 on an 8 core machine) and 
native transport requests get blocked in tpstats.

I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.

The issue does not seem to affect the nodes running 2.1.9.

The issue seems to coincide with the number of connections OR the number of 
total requests being processed at a given time (as the latter increases with 
the former in our system).

Currently there are between 600 and 800 client connections on each machine, and 
each machine is handling roughly 2000-3000 client requests per second.

Disabling the binary protocol fixes the issue for this node but isn't a viable 
option cluster-wide.

Here is the output from tpstats:

{code}
Pool NameActive   Pending  Completed   Blocked  All 
time blocked
MutationStage 0 88387821 0  
   0
ReadStage 0 0 355860 0  
   0
RequestResponseStage  0 72532457 0  
   0
ReadRepairStage   0 0150 0  
   0
CounterMutationStage 32   104 897560 0  
   0
MiscStage 0 0  0 0  
   0
HintedHandoff 0 0 65 0  
   0
GossipStage   0 0   2338 0  
   0
CacheCleanupExecutor  0 0  0 0  
   0
InternalResponseStage 0 0  0 0  
   0
CommitLogArchiver 0 0  0 0  
   0
CompactionExecutor2   190474 0  
   0
ValidationExecutor0 0  0 0  
   0
MigrationStage0 0 10 0  
   0
AntiEntropyStage  0 0  0 0  
   0
PendingRangeCalculator0 0310 0  
   0
Sampler   0 0  0 0  
   0
MemtableFlushWriter   110 94 0  
   0
MemtablePostFlush 134257 0  
   0
MemtableReclaimMemory 0 0 94 0  
   0
Native-Transport-Requests   128   156 38795716  
  278451

Message type   Dropped
READ 0
RANGE_SLICE  0
_TRACE   0
MUTATION 0
COUNTER_MUTATION 0
BINARY   0
REQUEST_RESPONSE 0
PAGED_RANGE  0
READ_REPAIR  0
{code}

Attached is the jstack output for both CMS and G1GC.

Flight recordings are here:
https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr

It is interesting to note that while the flight recording was taking place, the 
load on the machine went back to healthy, and when the flight recording 
finished the load went back to > 100.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load

2016-03-18 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15199929#comment-15199929
 ] 

Russell Bradberry commented on CASSANDRA-11363:
---

For troubleshooting, we set up a coordinator-only node and pointed one app 
server at it.  This resulted in roughly 90 connections to the node.  We 
witnessed many timeouts of requests from the app server's perspective.  We 
downgraded the coordinator-only node to 2.1.9 and upgraded point release by 
point release (DSE point releases) until we saw the same behavior in DSE 4.8.4 
(Cassandra 2.1.12).

I'm not certain this has to do with connection count anymore. 

We have several different types of workloads going on, but we found that only 
the workloads that use batches were timing out.  Additionally, this is only 
happening when the node is used as a coordinator.  We are not seeing this issue 
when we disable the binary protocol, effectively making the node no longer a 
coordinator.

> Blocked NTR When Connecting Causing Excessive Load
> --
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight 

[jira] [Comment Edited] (CASSANDRA-7464) Replace sstable2json and json2sstable

2015-12-29 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15074017#comment-15074017
 ] 

Russell Bradberry edited comment on CASSANDRA-7464 at 12/29/15 4:27 PM:


I would like to see an option for an output format that is more digestible 
by scripts.  The old sstable2json, and currently this tool, output the entire 
SSTable as a single pretty-formatted array.  This is great for visual 
inspection, but it requires loading an entire SSTable into memory before the 
JSON can be parsed.  There are tools that attempt to read a large JSON stream 
and emit objects as they complete, but these are rather cumbersome and 
difficult to use, and they also tend to differ from language to language.

What I would propose is a command line option that outputs one partition per 
line (escaping any newlines encountered) without any leading or trailing 
brackets or commas.  This would allow an application to read one partition at 
a time and work on it in a streaming fashion.

I also put my thoughts on this in this github issue: 
https://github.com/tolbertam/sstable-tools/issues/19


was (Author: devdazed):
I would like to see an option to have an output method that is more digestible 
by scripts.  The old sstable2json and currently this one, output the entire 
SSTable as a single array that is pretty-formatted.  This is great for visually 
looking at it but requires the loading of an entire SSTable into memory before 
JSON parsing it.  There are tools that attempt to read a large JSON stream and 
emit objects as they are complete, but these are rather cumbersome and 
difficult to use, also tend to be different fromm language to language.

What I would propose is to have a command line option that will output one 
partition per line (escaping any newlines encountered) without any leading 
trailing brackets or commas.  This will allow for an application to be able to 
read one partition at a time and work on it in a streaming fashion.

I also put my thoughts on this in this github issue: 
https://github.com/tolbertam/sstable-tools/issues/19

> Replace sstable2json and json2sstable
> -
>
> Key: CASSANDRA-7464
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7464
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 3.x
>
> Attachments: sstable-only.patch
>
>
> Both tools are pretty awful. They are primarily meant for debugging (there is 
> much more efficient and convenient ways to do import/export data), but their 
> output manage to be hard to handle both for humans and for tools (especially 
> as soon as you have modern stuff like composites).
> There is value to having tools to export sstable contents into a format that 
> is easy to manipulate by human and tools for debugging, small hacks and 
> general tinkering, but sstable2json and json2sstable are not that.  
> So I propose that we deprecate those tools and consider writing better 
> replacements. It shouldn't be too hard to come up with an output format that 
> is more aware of modern concepts like composites, UDTs, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7464) Replace sstable2json and json2sstable

2015-12-29 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15074017#comment-15074017
 ] 

Russell Bradberry edited comment on CASSANDRA-7464 at 12/29/15 3:54 PM:


I would like to see an option for an output format that is more digestible 
by scripts.  The old sstable2json, and currently this tool, output the entire 
SSTable as a single pretty-formatted array.  This is great for visual 
inspection, but it requires loading an entire SSTable into memory before the 
JSON can be parsed.  There are tools that attempt to read a large JSON stream 
and emit objects as they complete, but these are rather cumbersome and 
difficult to use, and they also tend to differ from language to language.

What I would propose is a command line option that outputs one partition per 
line (escaping any newlines encountered) without any leading or trailing 
brackets or commas.  This would allow an application to read one partition at 
a time and work on it in a streaming fashion.

I also put my thoughts on this in this github issue: 
https://github.com/tolbertam/sstable-tools/issues/19


was (Author: devdazed):
Personally I would like to see an option to have an output method that is more 
digestible by scripts.  The old sstable2json and currently this one, output the 
entire SSTable as a single array that is pretty-formatted.  This is great for 
visually looking at it but requires the loading of an entire SSTable into 
memory before JSON parsing it.  There are tools that attempt to read a large 
JSON stream and emit objects as they are complete, but these are rather 
cumbersome and difficult to use, also tend to be different form language to 
language.

What I would propose is to have a command line option that will output one 
partition per line (escaping any newlines encountered) without any leading 
trailing brackets or commas.  This will allow for an application to be able to 
read one partition at a time and work on it in a streaming fashion.

I also put my thoughts on this in this github issue: 
https://github.com/tolbertam/sstable-tools/issues/19

> Replace sstable2json and json2sstable
> -
>
> Key: CASSANDRA-7464
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7464
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 3.x
>
> Attachments: sstable-only.patch
>
>
> Both tools are pretty awful. They are primarily meant for debugging (there is 
> much more efficient and convenient ways to do import/export data), but their 
> output manage to be hard to handle both for humans and for tools (especially 
> as soon as you have modern stuff like composites).
> There is value to having tools to export sstable contents into a format that 
> is easy to manipulate by human and tools for debugging, small hacks and 
> general tinkering, but sstable2json and json2sstable are not that.  
> So I propose that we deprecate those tools and consider writing better 
> replacements. It shouldn't be too hard to come up with an output format that 
> is more aware of modern concepts like composites, UDTs, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7464) Replace sstable2json and json2sstable

2015-12-29 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15074017#comment-15074017
 ] 

Russell Bradberry commented on CASSANDRA-7464:
--

Personally I would like to see an option for an output format that is more 
digestible by scripts.  The old sstable2json, and currently this tool, output 
the entire SSTable as a single, pretty-formatted array.  This is great for 
visual inspection, but it requires loading the entire SSTable into memory 
before JSON parsing it.  There are tools that attempt to read a large JSON 
stream and emit objects as they become complete, but these are rather 
cumbersome and difficult to use, and they also tend to differ from language to 
language.

What I would propose is a command line option that outputs one partition per 
line (escaping any newlines encountered), without any leading/trailing brackets 
or commas.  This would allow an application to read one partition at a time and 
work on it in a streaming fashion.

I also put my thoughts on this in this github issue: 
https://github.com/tolbertam/sstable-tools/issues/19

> Replace sstable2json and json2sstable
> -
>
> Key: CASSANDRA-7464
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7464
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 3.x
>
> Attachments: sstable-only.patch
>
>
> Both tools are pretty awful. They are primarily meant for debugging (there is 
> much more efficient and convenient ways to do import/export data), but their 
> output manage to be hard to handle both for humans and for tools (especially 
> as soon as you have modern stuff like composites).
> There is value to having tools to export sstable contents into a format that 
> is easy to manipulate by human and tools for debugging, small hacks and 
> general tinkering, but sstable2json and json2sstable are not that.  
> So I propose that we deprecate those tools and consider writing better 
> replacements. It shouldn't be too hard to come up with an output format that 
> is more aware of modern concepts like composites, UDTs, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10070) Automatic repair scheduling

2015-12-07 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045293#comment-15045293
 ] 

Russell Bradberry commented on CASSANDRA-10070:
---

While it may intuitively seem like you want to kick off a repair as soon as a 
node comes back online, doing so can be very dangerous in a production 
environment. 

Starting the most resource-intensive process on a node that is already 
problematic, in a cluster that is already having issues, can exacerbate the 
problem and lead to a longer outage, or degradation, than anticipated.  

Network reliability is another aspect of this.  Let's say you have 3 nodes, 
RF=3, and there is a partition dividing node A and node B.  All nodes are 
actually still up, but in this case node A will start a repair on B and B will 
start a repair on A.  Now 2/3 of your cluster is needlessly repairing, which 
can cause serious performance problems, especially on a loaded cluster.

There are also other times you might not want a repair started automatically:

 - The cluster is in the middle of a rolling upgrade where streaming is broken 
between versions.  
 - Heavily loaded clusters during normal operation (some users schedule repairs 
at night so as not to affect performance during normal hours of operation; a 
minimal scheduling sketch follows below)
 - Clusters where the read consistency is high enough to account for the hints 
beyond the window, allowing the user to schedule the repair for a time that 
makes sense for their cluster and use-case.
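
As a rough illustration of the "schedule repairs for a quiet window" 
alternative (a sketch under assumptions: {{nodetool}} is on the PATH, and the 
02:00 window and the keyspace name are placeholders, not anything built in):

{code}
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Runs "nodetool repair -pr my_keyspace" once a day at roughly 02:00 local time.
public class NightlyRepair {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // For simplicity, always target tomorrow 02:00 for the first run.
        long initialDelayMinutes = Duration.between(
                LocalDateTime.now(),
                LocalDateTime.now().toLocalDate().plusDays(1).atTime(LocalTime.of(2, 0)))
                .toMinutes();
        scheduler.scheduleAtFixedRate(NightlyRepair::runRepair,
                initialDelayMinutes, TimeUnit.DAYS.toMinutes(1), TimeUnit.MINUTES);
    }

    private static void runRepair() {
        try {
            // -pr repairs only this node's primary ranges, so running the same
            // schedule on every node does not repair each range multiple times.
            Process p = new ProcessBuilder("nodetool", "repair", "-pr", "my_keyspace")
                    .inheritIO()
                    .start();
            p.waitFor();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
{code}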





> Automatic repair scheduling
> ---
>
> Key: CASSANDRA-10070
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10070
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Olsson
>Assignee: Marcus Olsson
>Priority: Minor
> Fix For: 3.x
>
>
> Scheduling and running repairs in a Cassandra cluster is most often a 
> required task, but this can both be hard for new users and it also requires a 
> bit of manual configuration. There are good tools out there that can be used 
> to simplify things, but wouldn't this be a good feature to have inside of 
> Cassandra? To automatically schedule and run repairs, so that when you start 
> up your cluster it basically maintains itself in terms of normal 
> anti-entropy, with the possibility for manual configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7464) Replace sstable2json and json2sstable

2015-12-03 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15038339#comment-15038339
 ] 

Russell Bradberry commented on CASSANDRA-7464:
--

It is absolutely insane that a perfectly working, albeit not the greatest, 
troubleshooting tool was removed and not replaced with anything. We now have no 
way at all to look into the SSTables. This makes troubleshooting production 
problems incredibly difficult. I am curious as to why enough consideration 
wasn't given to holding off the removal of the tool until the new one was ready 
to go.

> Replace sstable2json and json2sstable
> -
>
> Key: CASSANDRA-7464
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7464
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Priority: Minor
> Fix For: 3.x
>
>
> Both tools are pretty awful. They are primarily meant for debugging (there is 
> much more efficient and convenient ways to do import/export data), but their 
> output manage to be hard to handle both for humans and for tools (especially 
> as soon as you have modern stuff like composites).
> There is value to having tools to export sstable contents into a format that 
> is easy to manipulate by human and tools for debugging, small hacks and 
> general tinkering, but sstable2json and json2sstable are not that.  
> So I propose that we deprecate those tools and consider writing better 
> replacements. It shouldn't be too hard to come up with an output format that 
> is more aware of modern concepts like composites, UDTs, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-9618) Consider deprecating sstable2json/json2sstable in 2.2

2015-12-03 Thread Russell Bradberry (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russell Bradberry updated CASSANDRA-9618:
-
Comment: was deleted

(was: It is absolutely insane that a perfectly working, albeit not the 
greatest, troubleshooting tool was removed and not replaced with anything.  We 
now have no way at all to look into the SSTables.  This makes troubleshooting 
production problems incredibly difficult.  I am curious as to why enough 
consideration wasn't given to hold off the removal of the tool until the new 
one was ready to go. )

> Consider deprecating sstable2json/json2sstable in 2.2
> -
>
> Key: CASSANDRA-9618
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9618
> Project: Cassandra
>  Issue Type: Task
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 2.2.0 rc2
>
> Attachments: 0001-Deprecate-sstable2json-and-json2sstable.patch
>
>
> The rational is explained in CASSANDRA-7464 but to rephrase a bit:
> * json2sstable is pretty much useless, {{CQLSSTableWriter}} is way more 
> flexible if you need to write sstable directly.
> * sstable2json is really only potentially useful for debugging, but it's 
> pretty bad at that (it's output is not really all that helpul in modern 
> Cassandra in particular).
> Now, it happens that updating those tool for CASSANDRA-8099, while possible, 
> is a bit involved. So I don't think it make sense to invest effort in 
> maintain these tools. So I propose to deprecate these in 2.2 with removal in 
> 3.0.
> I'll note that having a tool to help debugging sstable can be useful, but I 
> propose to add a tool for that purpose with CASSANDRA-7464.
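
For reference, the kind of direct sstable writing the description alludes to 
with {{CQLSSTableWriter}} looks roughly like the sketch below (written from 
memory, so treat the exact builder methods as assumptions that may differ 
between versions; the schema, insert statement, and output directory are 
placeholders):

{code}
import org.apache.cassandra.io.sstable.CQLSSTableWriter;

// Writes sstables directly from application data instead of going through
// json2sstable; the generated files can then be bulk-loaded, e.g. with
// sstableloader.
public class DirectSSTableWrite {
    public static void main(String[] args) throws Exception {
        String schema = "CREATE TABLE ks.events (id text PRIMARY KEY, payload text)";
        String insert = "INSERT INTO ks.events (id, payload) VALUES (?, ?)";

        CQLSSTableWriter writer = CQLSSTableWriter.builder()
                .inDirectory("/tmp/ks/events")   // target directory for generated sstables
                .forTable(schema)
                .using(insert)
                .build();
        try {
            writer.addRow("row-1", "hello");
            writer.addRow("row-2", "world");
        } finally {
            writer.close();
        }
    }
}
{code}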



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9618) Consider deprecating sstable2json/json2sstable in 2.2

2015-12-03 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15038308#comment-15038308
 ] 

Russell Bradberry commented on CASSANDRA-9618:
--

It is absolutely insane that a perfectly working, albeit not the greatest, 
troubleshooting tool was removed and not replaced with anything.  We now have 
no way at all to look into the SSTables.  This makes troubleshooting production 
problems incredibly difficult.  I am curious as to why enough consideration 
wasn't given to holding off the removal of the tool until the new one was ready 
to go. 

> Consider deprecating sstable2json/json2sstable in 2.2
> -
>
> Key: CASSANDRA-9618
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9618
> Project: Cassandra
>  Issue Type: Task
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 2.2.0 rc2
>
> Attachments: 0001-Deprecate-sstable2json-and-json2sstable.patch
>
>
> The rational is explained in CASSANDRA-7464 but to rephrase a bit:
> * json2sstable is pretty much useless, {{CQLSSTableWriter}} is way more 
> flexible if you need to write sstable directly.
> * sstable2json is really only potentially useful for debugging, but it's 
> pretty bad at that (it's output is not really all that helpul in modern 
> Cassandra in particular).
> Now, it happens that updating those tool for CASSANDRA-8099, while possible, 
> is a bit involved. So I don't think it make sense to invest effort in 
> maintain these tools. So I propose to deprecate these in 2.2 with removal in 
> 3.0.
> I'll note that having a tool to help debugging sstable can be useful, but I 
> propose to add a tool for that purpose with CASSANDRA-7464.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10619) disallow streaming operations while upgrading

2015-11-12 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002699#comment-15002699
 ] 

Russell Bradberry commented on CASSANDRA-10619:
---

So you are trying to prevent streaming only between nodes that are on 
incompatible versions, rather than, as I had inferred, disabling streaming 
cluster-wide until all nodes have been upgraded?

> disallow streaming operations while upgrading
> -
>
> Key: CASSANDRA-10619
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10619
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jon Haddad
>
> Cassandra should prevent users from doing streaming operations in the middle 
> of a cluster upgrade.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10619) disallow streaming operations while upgrading

2015-11-12 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002467#comment-15002467
 ] 

Russell Bradberry edited comment on CASSANDRA-10619 at 11/12/15 5:23 PM:
-

How will the cluster know it is in the middle of an upgrade?  By whether all 
nodes are reporting the same version?  Also, what about very large clusters 
that may take weeks (months?) to upgrade?  Is it really ideal to disallow any 
streaming for that length of time?
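
For what it's worth, a rough operator-side version of that check could be a 
script that refuses to start streaming operations while gossip shows more than 
one release version (a sketch under assumptions: {{nodetool}} is on the PATH 
and {{nodetool gossipinfo}} prints a RELEASE_VERSION entry per endpoint, whose 
exact format varies by version):

{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.Set;
import java.util.TreeSet;

// Collects the RELEASE_VERSION values reported by gossip and exits non-zero if
// more than one distinct version is present, i.e. the cluster looks mid-upgrade.
public class MixedVersionCheck {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("nodetool", "gossipinfo").start();
        Set<String> versions = new TreeSet<>();
        try (BufferedReader out = new BufferedReader(
                new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = out.readLine()) != null) {
                String trimmed = line.trim();
                if (trimmed.startsWith("RELEASE_VERSION")) {
                    versions.add(trimmed.substring(trimmed.lastIndexOf(':') + 1));
                }
            }
        }
        p.waitFor();
        if (versions.size() > 1) {
            System.err.println("Mixed release versions " + versions + ", not safe to stream");
            System.exit(1);
        }
        System.out.println("Single release version " + versions + ", streaming looks safe");
    }
}
{code}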


was (Author: devdazed):
Ho will the cluster know if it is in the middle of an upgrade?  If all nodes 
aren't reporting the same version?  Also, what about very large clusters that 
may take weeks (months?) to upgrade. Is it really ideal to disallow any 
streaming for that length of time?

> disallow streaming operations while upgrading
> -
>
> Key: CASSANDRA-10619
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10619
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jon Haddad
>
> Cassandra should prevent users from doing streaming operations in the middle 
> of a cluster upgrade.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10619) disallow streaming operations while upgrading

2015-11-12 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002467#comment-15002467
 ] 

Russell Bradberry commented on CASSANDRA-10619:
---

How will the cluster know it is in the middle of an upgrade?  By whether all 
nodes are reporting the same version?  Also, what about very large clusters 
that may take weeks (months?) to upgrade?  Is it really ideal to disallow any 
streaming for that length of time?

> disallow streaming operations while upgrading
> -
>
> Key: CASSANDRA-10619
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10619
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jon Haddad
>
> Cassandra should prevent users from doing streaming operations in the middle 
> of a cluster upgrade.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10047) nodetool aborts when attempting to cleanup a keyspace with no ranges

2015-08-11 Thread Russell Bradberry (JIRA)
Russell Bradberry created CASSANDRA-10047:
-

 Summary: nodetool aborts when attempting to cleanup a keyspace 
with no ranges
 Key: CASSANDRA-10047
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10047
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 2.1.8
Reporter: Russell Bradberry
Priority: Critical


When running nodetool cleanup in a DC that has no ranges for a keyspace, 
nodetool will abort with the following message when attempting to cleanup that 
keyspace:



{{root@analytics-004:~# nodetool cleanup}}
{{Aborted cleaning up atleast one column family in keyspace ks, check server 
logs for more information.}}
{{error: nodetool failed, check server logs}}
{{-- StackTrace --}}
{{java.lang.RuntimeException: nodetool failed, check server logs}} 
{{at org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:290)}}
{{at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202)}}


The error messages in the logs are :

{{CompactionManager.java:370 - Cleanup cannot run before a node has joined the 
ring}}


This behavior prevents subsequent keyspaces from getting cleaned up. The error 
message is also misleading as it suggests that the only reason  a node may not 
have ranges for a keyspace is because it has yet to join the ring.
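
Until this is fixed, one possible operator-side workaround (a sketch; the 
keyspace names are placeholders and {{nodetool}} is assumed to be on the PATH) 
is to invoke cleanup one keyspace at a time and continue past failures:

{code}
import java.util.Arrays;
import java.util.List;

// Runs "nodetool cleanup <keyspace>" separately for each keyspace so that one
// keyspace with no local ranges cannot abort the cleanup of the others.
public class CleanupEachKeyspace {
    public static void main(String[] args) throws Exception {
        List<String> keyspaces = Arrays.asList("ks", "analytics", "events"); // placeholders
        for (String ks : keyspaces) {
            Process p = new ProcessBuilder("nodetool", "cleanup", ks)
                    .inheritIO()
                    .start();
            int exit = p.waitFor();
            if (exit != 0) {
                // Log and keep going instead of aborting the remaining keyspaces.
                System.err.println("cleanup of " + ks + " failed with exit code " + exit);
            }
        }
    }
}
{code}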



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10047) nodetool aborts when attempting to cleanup a keyspace with no ranges

2015-08-11 Thread Russell Bradberry (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russell Bradberry updated CASSANDRA-10047:
--
Description: 
When running nodetool cleanup in a DC that has no ranges for a keyspace, 
nodetool will abort with the following message when attempting to cleanup that 
keyspace:

{code}
Aborted cleaning up atleast one column family in keyspace ks, check server logs 
for more information.
error: nodetool failed, check server logs
-- StackTrace --
java.lang.RuntimeException: nodetool failed, check server logs
at 
org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:290)
at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202)
{code}

The error messages in the logs are :

{code}
CompactionManager.java:370 - Cleanup cannot run before a node has joined the 
ring
{code}


This behavior prevents subsequent keyspaces from getting cleaned up. The error 
message is also misleading as it suggests that the only reason  a node may not 
have ranges for a keyspace is because it has yet to join the ring.

  was:
When running nodetool cleanup in a DC that has no ranges for a keyspace, 
nodetool will abort with the following message when attempting to cleanup that 
keyspace:



{{root@analytics-004:~# nodetool cleanup}}
{{Aborted cleaning up atleast one column family in keyspace ks, check server 
logs for more information.}}
{{error: nodetool failed, check server logs}}
{{-- StackTrace --}}
{{java.lang.RuntimeException: nodetool failed, check server logs}} 
{{at org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:290)}}
{{at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202)}}


The error messages in the logs are :

{{CompactionManager.java:370 - Cleanup cannot run before a node has joined the 
ring}}


This behavior prevents subsequent keyspaces from getting cleaned up. The error 
message is also misleading as it suggests that the only reason  a node may not 
have ranges for a keyspace is because it has yet to join the ring.


 nodetool aborts when attempting to cleanup a keyspace with no ranges
 

 Key: CASSANDRA-10047
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10047
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 2.1.8
Reporter: Russell Bradberry
Priority: Critical

 When running nodetool cleanup in a DC that has no ranges for a keyspace, 
 nodetool will abort with the following message when attempting to cleanup 
 that keyspace:
 {code}
 Aborted cleaning up atleast one column family in keyspace ks, check server 
 logs for more information.
 error: nodetool failed, check server logs
 -- StackTrace --
 java.lang.RuntimeException: nodetool failed, check server logs
   at 
 org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:290)
   at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202)
 {code}
 The error messages in the logs are :
 {code}
 CompactionManager.java:370 - Cleanup cannot run before a node has joined the 
 ring
 {code}
 This behavior prevents subsequent keyspaces from getting cleaned up. The 
 error message is also misleading as it suggests that the only reason  a node 
 may not have ranges for a keyspace is because it has yet to join the ring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10047) nodetool aborts when attempting to cleanup a keyspace with no ranges

2015-08-11 Thread Russell Bradberry (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russell Bradberry updated CASSANDRA-10047:
--
Description: 
When running nodetool cleanup in a DC that has no ranges for a keyspace, 
nodetool will abort with the following message when attempting to cleanup that 
keyspace:

{code}
Aborted cleaning up atleast one column family in keyspace ks, check server logs 
for more information.
error: nodetool failed, check server logs
-- StackTrace --
java.lang.RuntimeException: nodetool failed, check server logs
at 
org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:290)
at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202)
{code}

The error messages in the logs are :

{code}
CompactionManager.java:370 - Cleanup cannot run before a node has joined the 
ring
{code}


This behavior prevents subsequent keyspaces from getting cleaned up. The error 
message is also misleading as it suggests that the only reason  a node may not 
have ranges for a keyspace is because it has yet to join the ring.


  was:
When running nodetool cleanup in a DC that has no ranges for a keyspace, 
nodetool will abort with the following message when attempting to cleanup that 
keyspace:

{code}
Aborted cleaning up atleast one column family in keyspace ks, check server logs 
for more information.
error: nodetool failed, check server logs
-- StackTrace --
java.lang.RuntimeException: nodetool failed, check server logs
at 
org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:290)
at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202)
{code}

The error messages in the logs are :

{code}
CompactionManager.java:370 - Cleanup cannot run before a node has joined the 
ring
{code}


This behavior prevents subsequent keyspaces from getting cleaned up. The error 
message is also misleading as it suggests that the only reason  a node may not 
have ranges for a keyspace is because it has yet to join the ring.


 nodetool aborts when attempting to cleanup a keyspace with no ranges
 

 Key: CASSANDRA-10047
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10047
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 2.1.8
Reporter: Russell Bradberry
Priority: Minor

 When running nodetool cleanup in a DC that has no ranges for a keyspace, 
 nodetool will abort with the following message when attempting to cleanup 
 that keyspace:
 {code}
 Aborted cleaning up atleast one column family in keyspace ks, check server 
 logs for more information.
 error: nodetool failed, check server logs
 -- StackTrace --
 java.lang.RuntimeException: nodetool failed, check server logs
   at 
 org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:290)
   at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202)
 {code}
 The error messages in the logs are :
 {code}
 CompactionManager.java:370 - Cleanup cannot run before a node has joined the 
 ring
 {code}
 This behavior prevents subsequent keyspaces from getting cleaned up. The 
 error message is also misleading as it suggests that the only reason  a node 
 may not have ranges for a keyspace is because it has yet to join the ring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6846) Provide standard interface for deep application server integration

2014-03-13 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13933319#comment-13933319
 ] 

Russell Bradberry commented on CASSANDRA-6846:
--

[~gdusbabek] I completely agree.  Riak did something similar by separating the 
core distribution from the storage layer, allowing people to use components of 
Riak to build a distributed system of their own.  I'm not saying this is the 
right path for C*, but modularity would make everything a little easier, and it 
would also open the door for more awesome features in DSE, IMO.


 Provide standard interface for deep application server integration
 --

 Key: CASSANDRA-6846
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6846
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Tupshin Harper
Assignee: Tupshin Harper
Priority: Minor
  Labels: (╯°□°)╯︵┻━┻, ponies

 Instead of creating a pluggable interface for Thrift, I'd like to create a 
 pluggable interface for arbitrary app-server deep integration.
 Inspired by both the existence of intravert-ug, as well as there being a long 
 history of various parties embedding tomcat or jetty servlet engines inside 
 Cassandra, I'd like to propose the creation an internal somewhat stable 
 (versioned?) interface that could allow any app server to achieve deep 
 integration with Cassandra, and as a result, these servers could 
 1) host their own apis (REST, for example
 2) extend core functionality by having limited (see triggers and wide row 
 scanners) access to the internals of cassandra
 The hand wavey part comes because while I have been mulling this about for a 
 while, I have not spent any significant time into looking at the actual 
 surface area of intravert-ug's integration. But, using it as a model, and 
 also keeping in minds the general needs of your more traditional servlet/j2ee 
 containers, I believe we could come up with a reasonable interface to allow 
 any jvm app server to be integrated and maintained in or out of the Cassandra 
 tree.
 This would satisfy the needs that many of us (Both Ed and I, for example) to 
 have a much greater degree of control over server side execution, and to be 
 able to start building much more interestingly (and simply) tiered 
 applications.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6846) Provide standard interface for deep application server integration

2014-03-13 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13933380#comment-13933380
 ] 

Russell Bradberry commented on CASSANDRA-6846:
--

[~appodictic] What I'm saying is that if the project were componentized, then 
there wouldn't be a need to put it INSIDE Cassandra.  It would just connect 
using the available interface.  Want to add SOLR? Just drop a JAR.  Want to add 
Hadoop? Drop a JAR.  And so on, rather than having a custom-built, completely 
different project.
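
To make the "drop a JAR" idea concrete, here is a minimal, purely hypothetical 
sketch using the JDK's {{ServiceLoader}}; the {{ServerModule}} interface does 
not exist in Cassandra and is only meant to show the mechanics:

{code}
import java.util.ServiceLoader;

// A hypothetical extension point that an integration (a REST server, a search
// indexer, a Hadoop bridge, ...) could implement in its own JAR and register
// via a META-INF/services entry.
interface ServerModule {
    String name();
    void start();
}

public class ModuleHost {
    public static void main(String[] args) {
        // ServiceLoader discovers every implementation present on the classpath,
        // so "adding SOLR" or "adding Hadoop" becomes "drop the JAR in lib/".
        for (ServerModule module : ServiceLoader.load(ServerModule.class)) {
            System.out.println("starting module: " + module.name());
            module.start();
        }
    }
}
{code}

An integration packaged in its own JAR with a matching META-INF/services file 
would then be picked up simply by being on the classpath, without any custom 
build of Cassandra itself.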

 Provide standard interface for deep application server integration
 --

 Key: CASSANDRA-6846
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6846
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Tupshin Harper
Assignee: Tupshin Harper
Priority: Minor
  Labels: (╯°□°)╯︵┻━┻, ponies

 Instead of creating a pluggable interface for Thrift, I'd like to create a 
 pluggable interface for arbitrary app-server deep integration.
 Inspired by both the existence of intravert-ug, as well as there being a long 
 history of various parties embedding tomcat or jetty servlet engines inside 
 Cassandra, I'd like to propose the creation an internal somewhat stable 
 (versioned?) interface that could allow any app server to achieve deep 
 integration with Cassandra, and as a result, these servers could 
 1) host their own apis (REST, for example
 2) extend core functionality by having limited (see triggers and wide row 
 scanners) access to the internals of cassandra
 The hand wavey part comes because while I have been mulling this about for a 
 while, I have not spent any significant time into looking at the actual 
 surface area of intravert-ug's integration. But, using it as a model, and 
 also keeping in minds the general needs of your more traditional servlet/j2ee 
 containers, I believe we could come up with a reasonable interface to allow 
 any jvm app server to be integrated and maintained in or out of the Cassandra 
 tree.
 This would satisfy the needs that many of us (Both Ed and I, for example) to 
 have a much greater degree of control over server side execution, and to be 
 able to start building much more interestingly (and simply) tiered 
 applications.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6846) Provide standard interface for deep application server integration

2014-03-13 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13933893#comment-13933893
 ] 

Russell Bradberry commented on CASSANDRA-6846:
--

[~appodictic]
{quote}
To prevent vetos from being used capriciously, they must be accompanied by a 
technical justification showing why the change is bad (opens a security 
exposure, negatively affects performance, etc. ). A veto without a 
justification is invalid and has no weight.
{quote}

The veto was accompanied only by an opinion, not a technical justification.

 Provide standard interface for deep application server integration
 --

 Key: CASSANDRA-6846
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6846
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Tupshin Harper
Assignee: Tupshin Harper
Priority: Minor
  Labels: ponies

 Instead of creating a pluggable interface for Thrift, I'd like to create a 
 pluggable interface for arbitrary app-server deep integration.
 Inspired by both the existence of intravert-ug, as well as there being a long 
 history of various parties embedding tomcat or jetty servlet engines inside 
 Cassandra, I'd like to propose the creation an internal somewhat stable 
 (versioned?) interface that could allow any app server to achieve deep 
 integration with Cassandra, and as a result, these servers could 
 1) host their own apis (REST, for example
 2) extend core functionality by having limited (see triggers and wide row 
 scanners) access to the internals of cassandra
 The hand wavey part comes because while I have been mulling this about for a 
 while, I have not spent any significant time into looking at the actual 
 surface area of intravert-ug's integration. But, using it as a model, and 
 also keeping in minds the general needs of your more traditional servlet/j2ee 
 containers, I believe we could come up with a reasonable interface to allow 
 any jvm app server to be integrated and maintained in or out of the Cassandra 
 tree.
 This would satisfy the needs that many of us (Both Ed and I, for example) to 
 have a much greater degree of control over server side execution, and to be 
 able to start building much more interestingly (and simply) tiered 
 applications.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6846) Provide standard interface for deep application server integration

2014-03-12 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931951#comment-13931951
 ] 

Russell Bradberry commented on CASSANDRA-6846:
--

:+1: I'd like to take it one step further and even make parts of Cassandra, 
like CQL, use the interface as well.  Something like eating one's own dog food.  
That way the interface will grow with the features that are added to things 
like CQL, and it won't be a constant battle of "Feature X was added to CQL, can 
we please get it exposed in the interface?"

 Provide standard interface for deep application server integration
 --

 Key: CASSANDRA-6846
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6846
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Tupshin Harper
Assignee: Tupshin Harper
 Fix For: 3.0


 Instead of creating a pluggable interface for Thrift, I'd like to create a 
 pluggable interface for arbitrary app-server deep integration.
 Inspired by both the existence of intravert-ug, as well as there being a long 
 history of various parties embedding tomcat or jetty servlet engines inside 
 Cassandra, I'd like to propose the creation an internal somewhat stable 
 (versioned?) interface that could allow any app server to achieve deep 
 integration with Cassandra, and as a result, these servers could 
 1) host their own apis (REST, for example
 2) extend core functionality by having limited (see triggers and wide row 
 scanners) access to the internals of cassandra
 The hand wavey part comes because while I have been mulling this about for a 
 while, I have not spent any significant time into looking at the actual 
 surface area of intravert-ug's integration. But, using it as a model, and 
 also keeping in minds the general needs of your more traditional servlet/j2ee 
 containers, I believe we could come up with a reasonable interface to allow 
 any jvm app server to be integrated and maintained in or out of the Cassandra 
 tree.
 This would satisfy the needs that many of us (Both Ed and I, for example) to 
 have a much greater degree of control over server side execution, and to be 
 able to start building much more interestingly (and simply) tiered 
 applications.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-6846) Provide standard interface for deep application server integration

2014-03-12 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931951#comment-13931951
 ] 

Russell Bradberry edited comment on CASSANDRA-6846 at 3/12/14 4:34 PM:
---

+1 I'd like to take it one step further and even make parts of Cassandra, like 
CQL, use the interface as well.  Something like eating one's own dog food.  That 
way the interface will grow with the features that are added to things like 
CQL, and it won't be a constant battle of "Feature X was added to CQL, can we 
please get it exposed in the interface?"


was (Author: devdazed):
:+1: I'd like to take it one step further and even make parts of Cassandra, 
like CQL, use the interface as well.  Something of eating one's own dog food.  
That way the interface will grow with the features that are added to things 
like CQL and it won't be a constant battle of Feature X was added to CQL can 
we please get it exposed in the interface

 Provide standard interface for deep application server integration
 --

 Key: CASSANDRA-6846
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6846
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Tupshin Harper
Assignee: Tupshin Harper
 Fix For: 3.0


 Instead of creating a pluggable interface for Thrift, I'd like to create a 
 pluggable interface for arbitrary app-server deep integration.
 Inspired by both the existence of intravert-ug, as well as there being a long 
 history of various parties embedding tomcat or jetty servlet engines inside 
 Cassandra, I'd like to propose the creation an internal somewhat stable 
 (versioned?) interface that could allow any app server to achieve deep 
 integration with Cassandra, and as a result, these servers could 
 1) host their own apis (REST, for example
 2) extend core functionality by having limited (see triggers and wide row 
 scanners) access to the internals of cassandra
 The hand wavey part comes because while I have been mulling this about for a 
 while, I have not spent any significant time into looking at the actual 
 surface area of intravert-ug's integration. But, using it as a model, and 
 also keeping in minds the general needs of your more traditional servlet/j2ee 
 containers, I believe we could come up with a reasonable interface to allow 
 any jvm app server to be integrated and maintained in or out of the Cassandra 
 tree.
 This would satisfy the needs that many of us (Both Ed and I, for example) to 
 have a much greater degree of control over server side execution, and to be 
 able to start building much more interestingly (and simply) tiered 
 applications.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-6831) Updates to COMPACT STORAGE tables via cli drop CQL information

2014-03-10 Thread Russell Bradberry (JIRA)
Russell Bradberry created CASSANDRA-6831:


 Summary: Updates to COMPACT STORAGE tables via cli drop CQL 
information
 Key: CASSANDRA-6831
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6831
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Russell Bradberry
Priority: Minor


If a COMPACT STORAGE table is altered using the CLI, all information about the 
column names reverts to the initial key, column1, column2 naming.  
Additionally, the change to the column names will not take effect until the 
Cassandra service is restarted.  This means that clients using CQL will 
continue to work properly until the service is restarted, at which time they 
will start getting errors about non-existent columns in the table.

When attempting to rename the columns back using ALTER TABLE, an error stating 
that the column already exists will be raised.  The only way to get the names 
back is to ALTER TABLE and change the comment or something, which will bring 
back all the original column names.

This seems to be related to CASSANDRA-6676 and CASSANDRA-6370

In cqlsh
{code}
Connected to cluster1 at 127.0.0.3:9160.
[cqlsh 3.1.8 | Cassandra 1.2.15-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
19.36.2]
Use HELP for help.
cqlsh CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy', 
'replication_factor' : 3 };
cqlsh USE test;
cqlsh:test CREATE TABLE foo (bar text, baz text, qux text, PRIMARY KEY(bar, 
baz) ) WITH COMPACT STORAGE;
cqlsh:test describe table foo;

CREATE TABLE foo (
  bar text,
  baz text,
  qux text,
  PRIMARY KEY (bar, baz)
) WITH COMPACT STORAGE AND
  bloom_filter_fp_chance=0.01 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.00 AND
  gc_grace_seconds=864000 AND
  read_repair_chance=0.10 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'SnappyCompressor'};
{code}

Now in cli:
{code}

  Connected to: cluster1 on 127.0.0.3/9160
Welcome to Cassandra CLI version 1.2.15-SNAPSHOT

Type 'help;' or '?' for help.
Type 'quit;' or 'exit;' to quit.

[default@unknown] use test;
Authenticated to keyspace: test
[default@test] UPDATE COLUMN FAMILY foo WITH comment='hey this is a comment';
3bf5fa49-5d03-34f0-b46c-6745f7740925
{code}

Now back in cqlsh:
{code}
cqlsh:test describe table foo;

CREATE TABLE foo (
  bar text,
  column1 text,
  value text,
  PRIMARY KEY (bar, column1)
) WITH COMPACT STORAGE AND
  bloom_filter_fp_chance=0.01 AND
  caching='KEYS_ONLY' AND
  comment='hey this is a comment' AND
  dclocal_read_repair_chance=0.00 AND
  gc_grace_seconds=864000 AND
  read_repair_chance=0.10 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'SnappyCompressor'};

cqlsh:test ALTER TABLE foo WITH comment='this is a new comment';
cqlsh:test describe table foo;

CREATE TABLE foo (
  bar text,
  baz text,
  qux text,
  PRIMARY KEY (bar, baz)
) WITH COMPACT STORAGE AND
  bloom_filter_fp_chance=0.01 AND
  caching='KEYS_ONLY' AND
  comment='this is a new comment' AND
  dclocal_read_repair_chance=0.00 AND
  gc_grace_seconds=864000 AND
  read_repair_chance=0.10 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'SnappyCompressor'};
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-5899) Sends all interface in native protocol notification when rpc_address=0.0.0.0

2013-12-13 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13847535#comment-13847535
 ] 

Russell Bradberry commented on CASSANDRA-5899:
--

{quote}
 I am unsure how any single value for broadcast_rpc_address would be useful
{quote}

A very common setup is to have a CNAME that resolves to the internal IP 
address when queried from within the DC and to an external IP address when 
queried from outside the DC.  Setting the broadcast address to this common 
CNAME would allow clients both internal and external to the DC to connect in 
the same way.

 Sends all interface in native protocol notification when rpc_address=0.0.0.0
 

 Key: CASSANDRA-5899
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5899
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Priority: Minor
 Fix For: 2.1


 For the native protocol notifications, when we send a new node notification, 
 we send the rpc_address of that new node. For this to be actually useful, 
 that address sent should be publicly accessible by the driver it is destined 
 to. 
 The problem is when rpc_address=0.0.0.0. Currently, we send the 
 listen_address, which is correct in the sense that we do are bind on it but 
 might not be accessible by client nodes.
 In fact, one of the good reason to use 0.0.0.0 rpc_address would be if you 
 have a private network for internode communication and another for 
 client-server communinations, but still want to be able to issue query from 
 the private network for debugging. In that case, the current behavior to send 
 listen_address doesn't really help.
 So one suggestion would be to instead send all the addresses on which the 
 (native protocol) server is bound to (which would still leave to the driver 
 the task to pick the right one, but at least it has something to pick from).
 That's relatively trivial to do in practice, but it does require a minor 
 binary protocol break to return a list instead of just one IP, which is why 
 I'm tentatively marking this 2.0. Maybe we can shove that tiny change in the 
 final (in the protocol v2 only)? Povided we agree it's a good idea of course.
 Now to be complete, for the same reasons, we would also need to store all the 
 addresses we are bound to in the peers table. That's also fairly simple and 
 the backward compatibility story is maybe a tad simpler: we could add a new 
 {{rpc_addresses}} column that would be a list and deprecate {{rpc_address}} 
 (to be removed in 2.1 for instance).



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-6370) Updating cql created table through cassandra-cli transform it into a compact storage table

2013-11-18 Thread Russell Bradberry (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13825392#comment-13825392
 ] 

Russell Bradberry commented on CASSANDRA-6370:
--

I tend to agree.  If there is any unexpected behavior that could arise, then it 
should either be prevented from happening or be preceded by a big warning like 
"THIS WILL ALTER YOUR TABLE WITH COMPACT STORAGE ... Continue Y/N?", so the 
user is aware of what is happening.  Simply saying it's hidden when you list it 
is not a solution IMO.

 Updating cql created table through cassandra-cli transform it into a compact 
 storage table
 --

 Key: CASSANDRA-6370
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6370
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Alain RODRIGUEZ
Assignee: Sylvain Lebresne
Priority: Critical

 To reproduce :
 echo CREATE TABLE test (aid int, period text, event text, viewer text, 
 PRIMARY KEY (aid, period, event, viewer) ); | cqlsh -kmykeyspace;
 echo describe table test; | cqlsh -kmykeyspace;
 Output 
 CREATE TABLE test (
   aid int,
   period text,
   event text,
   viewer text,
   PRIMARY KEY (aid, period, event, viewer)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 Then do :
 echo update column family test with dclocal_read_repair_chance = 0.1; | 
 cassandra-cli -kmykeyspace
 And finally again : echo describe table test; | cqlsh -kmykeyspace;
 Output 
 CREATE TABLE test (
   aid int,
   column1 text,
   column2 text,
   column3 text,
   column4 text,
   value blob,
   PRIMARY KEY (aid, column1, column2, column3, column4)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.10 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 This is quite annoying in production. If it is happening to you: 
 UPDATE system.schema_columnfamilies SET column_aliases = 
 '[period,event,viewer]' WHERE keyspace_name='mykeyspace' AND 
 columnfamily_name='test'; should help restoring the table. (Thanks Sylvain 
 for this information.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-5246) Tables Created via CQL Not Visible in CLI

2013-02-12 Thread Russell Bradberry (JIRA)
Russell Bradberry created CASSANDRA-5246:


 Summary: Tables Created via CQL Not Visible in CLI
 Key: CASSANDRA-5246
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5246
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
Reporter: Russell Bradberry


When creating tables in CQL, the tables do not show up in the `show schema` 
command in the cli.

To recreate:

{code}
$ cqlsh -3
Connected to Test Cluster at localhost:9160.
[cqlsh 2.3.0 | Cassandra 1.2.0 | CQL spec 3.0.0 | Thrift protocol 19.35.0]
Use HELP for help.
cqlsh CREATE KEYSPACE my_test WITH replication = { 'class': 
'NetworkTopologyStrategy',  'datacenter1': '1' };
cqlsh USE my_test;
cqlsh:my_test CREATE TABLE lolwut ( col1 text, col2 text, col3 text, PRIMARY 
KEY (col1));
cqlsh:my_test DESCRIBE TABLES;

lolwut

cqlsh:my_test exit

$ cassandra-cli -k my_test;
Connected to: Test Cluster on 127.0.0.1/9160
Welcome to Cassandra CLI version 1.2.1

Type 'help;' or '?' for help.
Type 'quit;' or 'exit;' to quit.

[default@my_test] show schema;
create keyspace my_test
  with placement_strategy = 'NetworkTopologyStrategy'
  and strategy_options = {datacenter1 : 1}
  and durable_writes = true;

use my_test;



[default@my_test] list lolwut;
Using default limit of 100
Using default column limit of 100

0 Row Returned.
Elapsed time: 21 msec(s).
[default@my_test] describe lolwut;
ColumnFamily: lolwut
  Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
  Default column value validator: org.apache.cassandra.db.marshal.BytesType
  Columns sorted by: 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type)
  GC grace seconds: 0
  Compaction min/max thresholds: 0/0
  Read repair chance: 0.0
  DC Local Read repair chance: 0.0
  Replicate on write: false
  Caching: keys_only
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: null
{code}


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5246) Tables Created via CQL Not Visible in CLI

2013-02-12 Thread Russell Bradberry (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russell Bradberry updated CASSANDRA-5246:
-

Description: 
When creating tables in CQL, the tables do not show up in the `show schema` 
command in the cli.

To recreate:

{code}
$ cqlsh -3
Connected to Test Cluster at localhost:9160.
[cqlsh 2.3.0 | Cassandra 1.2.1 | CQL spec 3.0.0 | Thrift protocol 19.35.0]
Use HELP for help.
cqlsh CREATE KEYSPACE my_test WITH replication = { 'class': 
'NetworkTopologyStrategy',  'datacenter1': '1' };
cqlsh USE my_test;
cqlsh:my_test CREATE TABLE lolwut ( col1 text, col2 text, col3 text, PRIMARY 
KEY (col1));
cqlsh:my_test DESCRIBE TABLES;

lolwut

cqlsh:my_test exit

$ cassandra-cli -k my_test;
Connected to: Test Cluster on 127.0.0.1/9160
Welcome to Cassandra CLI version 1.2.1

Type 'help;' or '?' for help.
Type 'quit;' or 'exit;' to quit.

[default@my_test] show schema;
create keyspace my_test
  with placement_strategy = 'NetworkTopologyStrategy'
  and strategy_options = {datacenter1 : 1}
  and durable_writes = true;

use my_test;



[default@my_test] list lolwut;
Using default limit of 100
Using default column limit of 100

0 Row Returned.
Elapsed time: 21 msec(s).
[default@my_test] describe lolwut;
ColumnFamily: lolwut
  Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
  Default column value validator: org.apache.cassandra.db.marshal.BytesType
  Columns sorted by: 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type)
  GC grace seconds: 0
  Compaction min/max thresholds: 0/0
  Read repair chance: 0.0
  DC Local Read repair chance: 0.0
  Replicate on write: false
  Caching: keys_only
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: null
{code}


  was:
When creating tables in CQL, the tables do not show up in the `show schema` 
command in the cli.

To recreate:

{code}
$ cqlsh -3
Connected to Test Cluster at localhost:9160.
[cqlsh 2.3.0 | Cassandra 1.2.0 | CQL spec 3.0.0 | Thrift protocol 19.35.0]
Use HELP for help.
cqlsh CREATE KEYSPACE my_test WITH replication = { 'class': 
'NetworkTopologyStrategy',  'datacenter1': '1' };
cqlsh USE my_test;
cqlsh:my_test CREATE TABLE lolwut ( col1 text, col2 text, col3 text, PRIMARY 
KEY (col1));
cqlsh:my_test DESCRIBE TABLES;

lolwut

cqlsh:my_test exit

$ cassandra-cli -k my_test;
Connected to: Test Cluster on 127.0.0.1/9160
Welcome to Cassandra CLI version 1.2.1

Type 'help;' or '?' for help.
Type 'quit;' or 'exit;' to quit.

[default@my_test] show schema;
create keyspace my_test
  with placement_strategy = 'NetworkTopologyStrategy'
  and strategy_options = {datacenter1 : 1}
  and durable_writes = true;

use my_test;



[default@my_test] list lolwut;
Using default limit of 100
Using default column limit of 100

0 Row Returned.
Elapsed time: 21 msec(s).
[default@my_test] describe lolwut;
ColumnFamily: lolwut
  Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
  Default column value validator: org.apache.cassandra.db.marshal.BytesType
  Columns sorted by: 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type)
  GC grace seconds: 0
  Compaction min/max thresholds: 0/0
  Read repair chance: 0.0
  DC Local Read repair chance: 0.0
  Replicate on write: false
  Caching: keys_only
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: null
{code}



 Tables Created via CQL Not Visible in CLI
 -

 Key: CASSANDRA-5246
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5246
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
Reporter: Russell Bradberry

 When creating tables in CQL, the tables do not show up in the `show schema` 
 command in the cli.
 To recreate:
 {code}
 $ cqlsh -3
 Connected to Test Cluster at localhost:9160.
 [cqlsh 2.3.0 | Cassandra 1.2.1 | CQL spec 3.0.0 | Thrift protocol 19.35.0]
 Use HELP for help.
 cqlsh CREATE KEYSPACE my_test WITH replication = { 'class': 
 'NetworkTopologyStrategy',  'datacenter1': '1' };
 cqlsh USE my_test;
 cqlsh:my_test CREATE TABLE lolwut ( col1 text, col2 text, col3 text, PRIMARY 
 KEY (col1));
 cqlsh:my_test DESCRIBE TABLES;
 lolwut
 cqlsh:my_test exit
 $ cassandra-cli -k my_test;
 Connected to: Test Cluster on 127.0.0.1/9160
 Welcome to Cassandra CLI version 1.2.1
 Type 'help;' or '?' for help.
 Type 'quit;' or 'exit;' to quit.
 [default@my_test] show schema;
 create keyspace my_test
   with placement_strategy = 'NetworkTopologyStrategy'
   and strategy_options = {datacenter1 : 1}
   and durable_writes = true;
 use my_test;
 [default@my_test] list lolwut;
 Using default limit of 100
 Using default column limit of 100
 0 

[jira] [Created] (CASSANDRA-5194) LIKE Operator in CQL When Creating Column Families

2013-01-28 Thread Russell Bradberry (JIRA)
Russell Bradberry created CASSANDRA-5194:


 Summary: LIKE Operator in CQL When Creating Column Families
 Key: CASSANDRA-5194
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5194
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Affects Versions: 1.2.1
Reporter: Russell Bradberry
Priority: Minor


Some RDBMSs have a very convenient feature that allows creating tables that 
look like other tables. 

The end result should look similar to:
{code}
CREATE TABLE foo LIKE bar;
{code}



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-4842) DateType in Column MetaData causes server crash

2012-10-19 Thread Russell Bradberry (JIRA)
Russell Bradberry created CASSANDRA-4842:


 Summary: DateType in Column MetaData causes server crash
 Key: CASSANDRA-4842
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4842
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0 beta 1, 1.1.6, 1.1.5
 Environment: All
Reporter: Russell Bradberry


When creating a column family with column metadata containing a date, there is 
a server crash that will prevent startup.

To recreate from the cli:
{code}
create keyspace test;
use test;
create column family foo
  with column_type = 'Standard'
  and comparator = 'CompositeType(LongType,DateType)'
  and default_validation_class = 'UTF8Type'
  and key_validation_class = 'UTF8Type'
  and column_metadata = [ 
{ column_name : '1234:1350695443433', validation_class : BooleanType} 
  ];
{code}

Produces this error in the logs:

{code}
ERROR 21:11:18,795 Error occurred during processing of message.
java.lang.RuntimeException: java.util.concurrent.ExecutionException: org.apache.cassandra.db.marshal.MarshalException: unable to coerce '2012-10-19 21' to a  formatted date (long)
    at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:373)
    at org.apache.cassandra.service.MigrationManager.announce(MigrationManager.java:194)
    at org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:141)
    at org.apache.cassandra.thrift.CassandraServer.system_add_column_family(CassandraServer.java:931)
    at org.apache.cassandra.thrift.Cassandra$Processor$system_add_column_family.getResult(Cassandra.java:3410)
    at org.apache.cassandra.thrift.Cassandra$Processor$system_add_column_family.getResult(Cassandra.java:3398)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
    at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:186)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:680)
Caused by: java.util.concurrent.ExecutionException: org.apache.cassandra.db.marshal.MarshalException: unable to coerce '2012-10-19 21' to a  formatted date (long)
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:369)
    ... 11 more
Caused by: org.apache.cassandra.db.marshal.MarshalException: unable to coerce '2012-10-19 21' to a  formatted date (long)
    at org.apache.cassandra.db.marshal.DateType.dateStringToTimestamp(DateType.java:117)
    at org.apache.cassandra.db.marshal.DateType.fromString(DateType.java:85)
    at org.apache.cassandra.db.marshal.AbstractCompositeType.fromString(AbstractCompositeType.java:213)
    at org.apache.cassandra.config.ColumnDefinition.fromSchema(ColumnDefinition.java:257)
    at org.apache.cassandra.config.CFMetaData.addColumnDefinitionSchema(CFMetaData.java:1318)
    at org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1250)
    at org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:299)
    at org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:434)
    at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:346)
    at org.apache.cassandra.service.MigrationManager$1.call(MigrationManager.java:217)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    ... 3 more
Caused by: java.text.ParseException: Unable to parse the date: 2012-10-19 21
    at org.apache.commons.lang.time.DateUtils.parseDate(DateUtils.java:285)
    at org.apache.cassandra.db.marshal.DateType.dateStringToTimestamp(DateType.java:113)
    ... 14 more
ERROR 21:11:18,795 Exception in thread Thread[MigrationStage:1,5,main]
org.apache.cassandra.db.marshal.MarshalException: unable to coerce '2012-10-19 21' to a  formatted date (long)
    at org.apache.cassandra.db.marshal.DateType.dateStringToTimestamp(DateType.java:117)
    at org.apache.cassandra.db.marshal.DateType.fromString(DateType.java:85)
    at org.apache.cassandra.db.marshal.AbstractCompositeType.fromString(AbstractCompositeType.java:213)
    at org.apache.cassandra.config.ColumnDefinition.fromSchema(ColumnDefinition.java:257)
    at org.apache.cassandra.config.CFMetaData.addColumnDefinitionSchema(CFMetaData.java:1318)
    at

[jira] [Updated] (CASSANDRA-4842) DateType in Column MetaData causes server crash

2012-10-19 Thread Russell Bradberry (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russell Bradberry updated CASSANDRA-4842:
-

Description: 
When creating a column family with column metadata whose composite name contains a DateType component, the server crashes and the node is then prevented from starting up.
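
The second component of the column_name in the reproduction below, 1350695443433, is interpreted by DateType as a millisecond timestamp. The stack trace suggests the definition is round-tripped through its string form during the schema merge: DateType renders the long as a human-readable date containing ':' characters, which collide with the ':' delimiter that AbstractCompositeType.fromString splits on, so DateType.fromString is handed only '2012-10-19 21'. A minimal, self-contained Java sketch of that collision (an illustration under that assumption, not Cassandra's actual code):

{code}
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

// Illustration only (not Cassandra code): rendering the DateType component of a
// composite name introduces ':' characters that break a ':'-delimited split
// when the string is parsed back.
public class CompositeDateRoundTrip
{
    public static void main(String[] args)
    {
        long millis = 1350695443433L; // second component of '1234:1350695443433'
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ssZ");

        // Hypothetical string form of the composite name: the LongType part stays
        // numeric, but the date part now contains colons
        // (zone-dependent output, e.g. "1234:2012-10-19 21:10:43-0400").
        String rendered = "1234" + ":" + fmt.format(new Date(millis));
        System.out.println(rendered);

        // Splitting on the composite delimiter chops the date at its first ':'.
        String datePart = rendered.split(":")[1];
        System.out.println(datePart); // e.g. "2012-10-19 21", matching the error below

        try
        {
            fmt.parse(datePart); // fails, mirroring the MarshalException in the logs
        }
        catch (ParseException e)
        {
            System.out.println("unable to parse: " + e.getMessage());
        }
    }
}
{code}

If that reading is right, any rendering of the date component that avoids or escapes the composite delimiter would sidestep the parse failure.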

To recreate from the cli:
{code}
create keyspace test;
use test;
create column family foo
  with column_type = 'Standard'
  and comparator = 'CompositeType(LongType,DateType)'
  and default_validation_class = 'UTF8Type'
  and key_validation_class = 'UTF8Type'
  and column_metadata = [ 
{ column_name : '1234:1350695443433', validation_class : BooleanType} 
  ];
{code}

Produces this error in the logs:

{code}
ERROR 21:11:18,795 Error occurred during processing of message.
java.lang.RuntimeException: java.util.concurrent.ExecutionException: org.apache.cassandra.db.marshal.MarshalException: unable to coerce '2012-10-19 21' to a  formatted date (long)
    at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:373)
    at org.apache.cassandra.service.MigrationManager.announce(MigrationManager.java:194)
    at org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:141)
    at org.apache.cassandra.thrift.CassandraServer.system_add_column_family(CassandraServer.java:931)
    at org.apache.cassandra.thrift.Cassandra$Processor$system_add_column_family.getResult(Cassandra.java:3410)
    at org.apache.cassandra.thrift.Cassandra$Processor$system_add_column_family.getResult(Cassandra.java:3398)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
    at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:186)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:680)
Caused by: java.util.concurrent.ExecutionException: org.apache.cassandra.db.marshal.MarshalException: unable to coerce '2012-10-19 21' to a  formatted date (long)
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:369)
    ... 11 more
Caused by: org.apache.cassandra.db.marshal.MarshalException: unable to coerce '2012-10-19 21' to a  formatted date (long)
    at org.apache.cassandra.db.marshal.DateType.dateStringToTimestamp(DateType.java:117)
    at org.apache.cassandra.db.marshal.DateType.fromString(DateType.java:85)
    at org.apache.cassandra.db.marshal.AbstractCompositeType.fromString(AbstractCompositeType.java:213)
    at org.apache.cassandra.config.ColumnDefinition.fromSchema(ColumnDefinition.java:257)
    at org.apache.cassandra.config.CFMetaData.addColumnDefinitionSchema(CFMetaData.java:1318)
    at org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1250)
    at org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:299)
    at org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:434)
    at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:346)
    at org.apache.cassandra.service.MigrationManager$1.call(MigrationManager.java:217)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    ... 3 more
Caused by: java.text.ParseException: Unable to parse the date: 2012-10-19 21
    at org.apache.commons.lang.time.DateUtils.parseDate(DateUtils.java:285)
    at org.apache.cassandra.db.marshal.DateType.dateStringToTimestamp(DateType.java:113)
    ... 14 more
ERROR 21:11:18,795 Exception in thread Thread[MigrationStage:1,5,main]
org.apache.cassandra.db.marshal.MarshalException: unable to coerce '2012-10-19 21' to a  formatted date (long)
    at org.apache.cassandra.db.marshal.DateType.dateStringToTimestamp(DateType.java:117)
    at org.apache.cassandra.db.marshal.DateType.fromString(DateType.java:85)
    at org.apache.cassandra.db.marshal.AbstractCompositeType.fromString(AbstractCompositeType.java:213)
    at org.apache.cassandra.config.ColumnDefinition.fromSchema(ColumnDefinition.java:257)
    at org.apache.cassandra.config.CFMetaData.addColumnDefinitionSchema(CFMetaData.java:1318)
    at org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1250)
    at org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:299)
    at org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:434)
at