[jira] [Created] (CASSANDRA-18103) Allow different compaction strategies per-DC

2022-12-08 Thread Johnny Miller (Jira)
Johnny Miller created CASSANDRA-18103:
-

 Summary: Allow different compaction strategies per-DC
 Key: CASSANDRA-18103
 URL: https://issues.apache.org/jira/browse/CASSANDRA-18103
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller


I have a requirement to deploy an additional DC. The cluster is split 
between multiple DCs: one on-premises and one in the cloud.

Several tables use LCS; they perform well on bare metal but poorly on the 
infrastructure allocated to the cloud DC.

The cloud deployment is intended to run offline, analytical, batch-type 
workloads where read response times are not critical enough to require LCS. The 
cost of provisioning storage suitable for LCS is high and is not justified by 
the system requirements or the budget.

The JMX call to change the compaction strategy locally (useful for testing or 
migrating compaction) unfortunately does not survive restarts, schema changes, 
etc.

It would be very helpful to be able to specify on the table which compaction 
strategy to use per DC, or to make the JMX change durable.
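
To make the request concrete, here is a purely hypothetical sketch of what a 
per-DC compaction option might look like (this syntax does not exist in any 
Cassandra release; the table and DC names are illustrative):

{code}
-- Hypothetical syntax, for illustration only
ALTER TABLE myks.mytable WITH compaction = {
  'onprem_dc': {'class': 'LeveledCompactionStrategy'},
  'cloud_dc':  {'class': 'SizeTieredCompactionStrategy'}
};
{code}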



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-18092) Allow DB role names to prefix with a number

2022-12-05 Thread Johnny Miller (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Miller updated CASSANDRA-18092:
--
Description: 
{{*Works -* CREATE ROLE IF NOT EXISTS test WITH PASSWORD='somepassword' AND 
LOGIN=true;}}
 
{{*Works* - CREATE ROLE IF NOT EXISTS test123 WITH PASSWORD='somepassword' AND 
LOGIN=true;}}
 
{{*Breaks* - CREATE ROLE IF NOT EXISTS 123test WITH PASSWORD='somepassword' AND 
LOGIN=true;}}
{color:#de350b}{{SyntaxException: line 1:26 no viable alternative at input 
'123' (CREATE ROLE IF NOT EXISTS [123]...)}}{color}
 
It would be helpful, and more consistent, to be able to start a role name with 
a digit instead of only being able to use digits as a suffix.

Environment details:

[cqlsh 6.0.0 | Cassandra 4.0.3 | CQL spec 3.4.5 | Native protocol v5]
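
A possible workaround worth noting (untested here; behaviour should be 
verified): quoting the identifier may let the parser accept a leading digit, at 
the cost of a case-sensitive, quoted role name.

{code}
CREATE ROLE IF NOT EXISTS "123test" WITH PASSWORD='somepassword' AND LOGIN=true;
{code}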

  was:
{{*Works -* CREATE ROLE IF NOT EXISTS test WITH PASSWORD='somepassword' AND 
LOGIN=true;}}
 
{{*Works* - CREATE ROLE IF NOT EXISTS test123 WITH PASSWORD='somepassword' AND 
LOGIN=true;}}
 
{{*Breaks* - CREATE ROLE IF NOT EXISTS 123test WITH PASSWORD='somepassword' AND 
LOGIN=true;}}
{color:#de350b}{{SyntaxException: line 1:26 no viable alternative at input 
'123' (CREATE ROLE IF NOT EXISTS [123]...)}}{color}
 
It would be helpful and more consistent to be able to prefix roles with a 
numeric value instead of only being able to do this as a suffix.

> Allow DB role names to prefix with a number
> ---
>
> Key: CASSANDRA-18092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18092
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Johnny Miller
>Priority: Normal
>
> {{*Works -* CREATE ROLE IF NOT EXISTS test WITH PASSWORD='somepassword' AND 
> LOGIN=true;}}
>  
> {{*Works* - CREATE ROLE IF NOT EXISTS test123 WITH PASSWORD='somepassword' 
> AND LOGIN=true;}}
>  
> {{*Breaks* - CREATE ROLE IF NOT EXISTS 123test WITH PASSWORD='somepassword' 
> AND LOGIN=true;}}
> {color:#de350b}{{SyntaxException: line 1:26 no viable alternative at input 
> '123' (CREATE ROLE IF NOT EXISTS [123]...)}}{color}
>  
> It would be helpful and more consistent to be able to prefix roles with a 
> numeric value instead of only being able to do this as a suffix.
> Env Details are:
> [cqlsh 6.0.0 | Cassandra 4.0.3 | CQL spec 3.4.5 | Native protocol v5]






[jira] [Created] (CASSANDRA-18092) Allow DB role names to prefix with a number

2022-12-05 Thread Johnny Miller (Jira)
Johnny Miller created CASSANDRA-18092:
-

 Summary: Allow DB role names to prefix with a number
 Key: CASSANDRA-18092
 URL: https://issues.apache.org/jira/browse/CASSANDRA-18092
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller


{{*Works -* CREATE ROLE IF NOT EXISTS test WITH PASSWORD='somepassword' AND 
LOGIN=true;}}
 
{{*Works* - CREATE ROLE IF NOT EXISTS test123 WITH PASSWORD='somepassword' AND 
LOGIN=true;}}
 
{{*Breaks* - CREATE ROLE IF NOT EXISTS 123test WITH PASSWORD='somepassword' AND 
LOGIN=true;}}
{color:#de350b}{{SyntaxException: line 1:26 no viable alternative at input 
'123' (CREATE ROLE IF NOT EXISTS [123]...)}}{color}
 
It would be helpful, and more consistent, to be able to start a role name with 
a digit instead of only being able to use digits as a suffix.






[jira] [Created] (CASSANDRA-17710) Add Driver Version to Cassandra audit logs

2022-06-22 Thread Johnny Miller (Jira)
Johnny Miller created CASSANDRA-17710:
-

 Summary: Add Driver Version to Cassandra audit logs
 Key: CASSANDRA-17710
 URL: https://issues.apache.org/jira/browse/CASSANDRA-17710
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller


When auditing access to Cassandra, it would be helpful to include (if provided) 
the version of the driver being used; see 
[https://cassandra.apache.org/doc/latest/cassandra/operating/audit_logging.html#what-does-audit-logging-logs]

 






[jira] [Commented] (CASSANDRA-16961) Timestamp String displayed for partition compaction warnings is not correct

2021-09-16 Thread Johnny Miller (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17416115#comment-17416115
 ] 

Johnny Miller commented on CASSANDRA-16961:
---

Thanks [~brandon.williams] - much appreciated. I may submit a pull request on 
the docs 
(https://cassandra.apache.org/doc/latest/cassandra/tools/nodetool/getendpoints.html)
 to make this clearer.

> Timestamp String displayed for partition compaction warnings is not correct
> ---
>
> Key: CASSANDRA-16961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16961
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Johnny Miller
>Priority: Normal
>
> When compaction encounters a large partition, it outputs a warning in the 
> logs e.g.:
>  (Apologies, had to redact some information)
> WARN [CompactionExecutor:343] 2021-09-16 09:28:43,539 BigTableWriter.java:211 
> - Writing large partition XXX/:sourceid:{color:#de350b}*2021-09-16 
> 05\:00Z*{color} (1.381GiB) to sstable 
> /mnt/var/lib/cassandra/data/segment/message-336c5ff04db211ebbffc2980407d44d6/md-58982-big-Data.db
> i.e 
> [https://github.com/apache/cassandra/blob/cassandra-3.11.5/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java#L211]
> *Example Table/insert*
> CREATE TABLE myks.mytable (
>  sourceid text,
>  {color:#de350b}*messagehour timestamp,*{color}
>  messagetime timestamp,
>  messageid text
>  PRIMARY KEY ((sourceid, messagehour), messagetime, messageid)
>  ) ;
>  
> insert into myks.mytable (sourceid, messagehour, messagetime, messageid) 
> values ('sourceid', '{color:#de350b}*2021-09-16 05:00Z'*{color}, '2021-09-16 
> 05:00:31Z', '123ABC');
> If I then need to try and work out which nodes in the cluster contain the 
> replica data for this partition (from the logs), I will get the token via CQL
> eg:
>  select distinct token(sourceid,messagehour) from myks.mytable where 
> sourceid='sourceid' and messagehour='{color:#de350b}*2021-09-16 
> 05:00Z*{color}';
> system.token(sourceid, messagehour)
>  -
>  {color:#de350b}*7663675819538124697*{color}
> I then run nodetool to get the endpoints for this token/ks/table
> eg
>  nodetool getendpoints myks mytable 
> {color:#de350b}*7663675819538124697*{color}
>  172.31.10.187
>  172.31.12.193
>  172.31.13.91
> And *the list of endpoints is not correct* as the value outputted in the 
> timestamp warning log entry, I suspect, is missing additional 
> information/precision so obviously will give back the wrong token and hence 
> the wrong endpoints.
> Possibly this warning log statement should output the actual partition key 
> token in addition to the other information to avoid confusion and the string 
> representation of the timestamp be correct.
>  






[jira] [Updated] (CASSANDRA-16961) Timestamp String displayed for partition compaction warnings is not correct

2021-09-16 Thread Johnny Miller (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Miller updated CASSANDRA-16961:
--
Description: 
When compaction encounters a large partition, it outputs a warning in the logs 
e.g.:
 (Apologies, had to redact some information)

WARN [CompactionExecutor:343] 2021-09-16 09:28:43,539 BigTableWriter.java:211 - 
Writing large partition XXX/:sourceid:{color:#de350b}*2021-09-16 
05\:00Z*{color} (1.381GiB) to sstable 
/mnt/var/lib/cassandra/data/segment/message-336c5ff04db211ebbffc2980407d44d6/md-58982-big-Data.db

i.e 
[https://github.com/apache/cassandra/blob/cassandra-3.11.5/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java#L211]

*Example Table/insert*

CREATE TABLE myks.mytable (
 sourceid text,
 {color:#de350b}*messagehour timestamp,*{color}
 messagetime timestamp,
 messageid text
 PRIMARY KEY ((sourceid, messagehour), messagetime, messageid)
 ) ;

 

insert into myks.mytable (sourceid, messagehour, messagetime, messageid) values 
('sourceid', '{color:#de350b}*2021-09-16 05:00Z'*{color}, '2021-09-16 
05:00:31Z', '123ABC');

If I then need to work out which nodes in the cluster hold the replicas for 
this partition (starting from the log entry), I get the token via CQL

eg:
 select distinct token(sourceid,messagehour) from myks.mytable where 
sourceid='sourceid' and messagehour='{color:#de350b}*2021-09-16 05:00Z*{color}';

system.token(sourceid, messagehour)
 -
 {color:#de350b}*7663675819538124697*{color}

I then run nodetool to get the endpoints for this token/ks/table

eg
 nodetool getendpoints myks mytable {color:#de350b}*7663675819538124697*{color}
 172.31.10.187
 172.31.12.193
 172.31.13.91

And *the list of endpoints is not correct*: the timestamp value printed in the 
warning log entry is, I suspect, missing additional precision, so it produces 
the wrong token and hence the wrong endpoints.

This warning log statement should possibly output the actual partition key 
token in addition to the other information, to avoid confusion, and the string 
representation of the timestamp should be made correct.
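
For illustration only (the fractional seconds below are assumed, not taken from 
the actual data): if the stored partition key had sub-second precision, the 
log's rendering '2021-09-16 05:00Z' would drop it, and re-querying with the 
logged string would compute a different token than the real key.

{code}
-- Hypothetical: assumes the stored value was actually 2021-09-16 05:00:00.123Z
select distinct token(sourceid, messagehour) from myks.mytable
 where sourceid='sourceid' and messagehour='2021-09-16 05:00:00.123Z';
{code}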

 

  was:
When compaction encounters a large partition, it outputs a warning in the logs 
e.g.:
(Apologies, had to redact some information)


WARN [CompactionExecutor:343] 2021-09-16 09:28:43,539 BigTableWriter.java:211 - 
Writing large partition XXX/:PROsVuVbHju33:{color:#de350b}*2021-09-16 
05\:00Z*{color} (1.381GiB) to sstable 
/mnt/var/lib/cassandra/data/segment/message-336c5ff04db211ebbffc2980407d44d6/md-58982-big-Data.db


i.e 
[https://github.com/apache/cassandra/blob/cassandra-3.11.5/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java#L211]


*Example Table/insert*


CREATE TABLE myks.mytable (
 sourceid text,
 {color:#de350b}*messagehour timestamp,*{color}
 messagetime timestamp,
 messageid text
 PRIMARY KEY ((sourceid, messagehour), messagetime, messageid)
) ;

 

insert into myks.mytable (sourceid, messagehour, messagetime, messageid) values 
('PROsVuVbHju33', '{color:#de350b}*2021-09-16 05:00Z'*{color}, '2021-09-16 
05:00:31Z', '123ABC');


If I then need to try and work out which nodes in the cluster contain the 
replica data for this partition (from the logs), I will get the token via CQL

eg:
select distinct token(sourceid,messagehour) from myks.mytable where 
sourceid='PROsVuVbHju33' and messagehour='{color:#de350b}*2021-09-16 
05:00Z*{color}';
 
 system.token(sourceid, messagehour)
-
 {color:#de350b}*7663675819538124697*{color}
 
I then run nodetool to get the endpoints for this token/ks/table
 
eg
nodetool getendpoints myks mytable {color:#de350b}*7663675819538124697*{color}
172.31.10.187
172.31.12.193
172.31.13.91
 
And *the list of endpoints is not correct* as the value outputted in the 
timestamp warning log entry, I suspect, is missing additional 
information/precision so obviously will give back the wrong token and hence the 
wrong endpoints.
 
Possibly this warning log statement should output the actual partition key 
token in addition to the other information to avoid confusion and the string 
representation of the timestamp be correct.

 


> Timestamp String displayed for partition compaction warnings is not correct
> ---
>
> Key: CASSANDRA-16961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16961
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Johnny Miller
>Priority: Normal
>
> When compaction encounters a large partition, it outputs a warning in the 
> logs e.g.:
>  (Apologies, had to redact some information)
> WARN [CompactionExecutor:343] 2021-09-16 09:28:43,539 BigTableWriter.java:211 
> - Writing large partition 

[jira] [Created] (CASSANDRA-16961) Timestamp String displayed for partition compaction warnings is not correct

2021-09-16 Thread Johnny Miller (Jira)
Johnny Miller created CASSANDRA-16961:
-

 Summary: Timestamp String displayed for partition compaction 
warnings is not correct
 Key: CASSANDRA-16961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-16961
 Project: Cassandra
  Issue Type: Bug
Reporter: Johnny Miller


When compaction encounters a large partition, it outputs a warning in the logs 
e.g.:
(Apologies, had to redact some information)


WARN [CompactionExecutor:343] 2021-09-16 09:28:43,539 BigTableWriter.java:211 - 
Writing large partition XXX/:PROsVuVbHju33:{color:#de350b}*2021-09-16 
05\:00Z*{color} (1.381GiB) to sstable 
/mnt/var/lib/cassandra/data/segment/message-336c5ff04db211ebbffc2980407d44d6/md-58982-big-Data.db


i.e 
[https://github.com/apache/cassandra/blob/cassandra-3.11.5/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java#L211]


*Example Table/insert*


CREATE TABLE myks.mytable (
 sourceid text,
 {color:#de350b}*messagehour timestamp,*{color}
 messagetime timestamp,
 messageid text
 PRIMARY KEY ((sourceid, messagehour), messagetime, messageid)
) ;

 

insert into myks.mytable (sourceid, messagehour, messagetime, messageid) values 
('PROsVuVbHju33', '{color:#de350b}*2021-09-16 05:00Z'*{color}, '2021-09-16 
05:00:31Z', '123ABC');


If I then need to try and work out which nodes in the cluster contain the 
replica data for this partition (from the logs), I will get the token via CQL

eg:
select distinct token(sourceid,messagehour) from myks.mytable where 
sourceid='PROsVuVbHju33' and messagehour='{color:#de350b}*2021-09-16 
05:00Z*{color}';
 
 system.token(sourceid, messagehour)
-
 {color:#de350b}*7663675819538124697*{color}
 
I then run nodetool to get the endpoints for this token/ks/table
 
eg
nodetool getendpoints myks mytable {color:#de350b}*7663675819538124697*{color}
172.31.10.187
172.31.12.193
172.31.13.91
 
And *the list of endpoints is not correct* as the value outputted in the 
timestamp warning log entry, I suspect, is missing additional 
information/precision so obviously will give back the wrong token and hence the 
wrong endpoints.
 
Possibly this warning log statement should output the actual partition key 
token in addition to the other information to avoid confusion and the string 
representation of the timestamp be correct.

 






[jira] [Created] (CASSANDRA-16043) Perform garbage collection on specific partitions or range of partitions

2020-08-11 Thread Johnny Miller (Jira)
Johnny Miller created CASSANDRA-16043:
-

 Summary: Perform garbage collection on specific partitions or 
range of partitions
 Key: CASSANDRA-16043
 URL: https://issues.apache.org/jira/browse/CASSANDRA-16043
 Project: Cassandra
  Issue Type: New Feature
Reporter: Johnny Miller


Some of our data is quite seasonal, varying by partition, and certain 
partitions tend to contain significantly more tombstones than others. We 
currently run nodetool garbagecollect on tables when this becomes an issue.

However, garbage collecting a whole table takes considerable time and 
resources. It would be useful to be able to target only the specific partitions 
(or ranges of partitions) we need, which should make this activity faster and 
cheaper.
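
For reference, the current whole-table invocation looks like the following (-g 
and -j are the documented granularity and jobs options of nodetool 
garbagecollect; the keyspace and table names are illustrative). A per-partition 
variant would presumably extend this command:

{code}
# Current behaviour: tombstone GC across the entire table
nodetool garbagecollect -g ROW -j 2 myks mytable
{code}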






[jira] [Updated] (CASSANDRA-13702) Error on keyspace create/alter if referencing non-existing DC in cluster

2017-07-20 Thread Johnny Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Miller updated CASSANDRA-13702:
--
Priority: Minor  (was: Major)

> Error on keyspace create/alter if referencing non-existing DC in cluster
> 
>
> Key: CASSANDRA-13702
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13702
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Johnny Miller
>Priority: Minor
>
> It is possible to create/alter a keyspace using NetworkTopologyStrategy and a 
> DC that does not exist. It would be great if this was validated to prevent 
> accidents.






[jira] [Created] (CASSANDRA-13702) Error on keyspace create/alter if referencing non-existing DC in cluster

2017-07-20 Thread Johnny Miller (JIRA)
Johnny Miller created CASSANDRA-13702:
-

 Summary: Error on keyspace create/alter if referencing 
non-existing DC in cluster
 Key: CASSANDRA-13702
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13702
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller


It is possible to create/alter a keyspace using NetworkTopologyStrategy and a 
DC that does not exist. It would be great if this was validated to prevent 
accidents.
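
For example, this succeeds today even though no DC named 'DC_typo' exists in 
the cluster, silently leaving that replication entry pointing at nothing 
(keyspace and DC names are illustrative):

{code}
CREATE KEYSPACE badks WITH replication =
  {'class': 'NetworkTopologyStrategy', 'DC_typo': 3};
{code}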






[jira] [Commented] (CASSANDRA-8907) Raise GCInspector alerts to WARN

2015-09-04 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731344#comment-14731344
 ] 

Johnny Miller commented on CASSANDRA-8907:
--

[~JoshuaMcKenzie] [~eanujwa] I would advocate a default of disabled and when 
disabled log out at INFO with the current behaviour. This should avoid breaking 
any existing log monitoring or alarming anyone with a load of new WARN log 
messages following a minor upgrade.

That way the onus is on the user to determine what level of pause warrants a 
WARN log for their specific use case. As long as it's clearly documented and in 
the yaml, users should be aware of it when reviewing their config.

Maybe we should revisit the default level in a later major release following 
feedback?
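
For illustration, the kind of yaml knob being discussed would look roughly like 
the following (a sketch of the proposal, not a description of any shipped 
default):

{code}
# cassandra.yaml - GC pauses longer than this are logged at WARN;
# a disabled/zero value would keep the existing INFO-level behaviour
gc_warn_threshold_in_ms: 1000
{code}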

> Raise GCInspector alerts to WARN
> 
>
> Key: CASSANDRA-8907
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8907
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Adam Hattrell
>Assignee: Amit Singh Chowdhery
>  Labels: patch
> Attachments: cassnadra-8907.patch
>
>
> I'm fairly regularly running into folks wondering why their applications are 
> reporting down nodes.  Yet, they report, when they grepped the logs they have 
> no WARN or ERRORs listed.
> Nine times out of ten, when I look through the logs we see a ton of ParNew or 
> CMS gc pauses occurring similar to the following:
> INFO [ScheduledTasks:1] 2013-03-07 18:44:46,795 GCInspector.java (line 122) 
> GC for ConcurrentMarkSweep: 1835 ms for 3 collections, 2606015656 used; max 
> is 10611589120
> INFO [ScheduledTasks:1] 2013-03-07 19:45:08,029 GCInspector.java (line 122) 
> GC for ParNew: 9866 ms for 8 collections, 2910124308 used; max is 6358564864
> To my mind these should be WARN's as they have the potential to be 
> significantly impacting the clusters performance as a whole.





[jira] [Comment Edited] (CASSANDRA-8907) Raise GCInspector alerts to WARN

2015-09-04 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731344#comment-14731344
 ] 

Johnny Miller edited comment on CASSANDRA-8907 at 9/4/15 8:15 PM:
--

[~JoshuaMcKenzie] [~eanujwa] I would advocate a default of disabled and when 
disabled log out at INFO with the current behaviour. This should avoid breaking 
any existing log monitoring or alarming anyone with a load of new WARN log 
messages following a minor upgrade.

That way the onus is on the user to determine what level of pause for their 
specific use case warrants a WARN log. As long as its clearly documented and in 
the yaml, users should be aware of it when reviewing their config.

Maybe we should revisit the default level in a later major release following 
feedback? Possibly default it to 200ms in 3.0?


was (Author: johnny15676):
[~JoshuaMcKenzie] [~eanujwa] I would advocate a default of disabled and when 
disabled log out at INFO with the current behaviour. This should avoid breaking 
any existing log monitoring or alarming anyone with a load of new WARN log 
messages following a minor upgrade.

That way the onus is on the user to determine what level of pause for their 
specific use case warrants a WARN log. As long as its clearly documented and in 
the yaml, users should be aware of it when reviewing their config.

Maybe we should revisit the default level in a later major release following 
feedback?

> Raise GCInspector alerts to WARN
> 
>
> Key: CASSANDRA-8907
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8907
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Adam Hattrell
>Assignee: Amit Singh Chowdhery
>  Labels: patch
> Attachments: cassnadra-8907.patch
>
>
> I'm fairly regularly running into folks wondering why their applications are 
> reporting down nodes.  Yet, they report, when they grepped the logs they have 
> no WARN or ERRORs listed.
> Nine times out of ten, when I look through the logs we see a ton of ParNew or 
> CMS gc pauses occurring similar to the following:
> INFO [ScheduledTasks:1] 2013-03-07 18:44:46,795 GCInspector.java (line 122) 
> GC for ConcurrentMarkSweep: 1835 ms for 3 collections, 2606015656 used; max 
> is 10611589120
> INFO [ScheduledTasks:1] 2013-03-07 19:45:08,029 GCInspector.java (line 122) 
> GC for ParNew: 9866 ms for 8 collections, 2910124308 used; max is 6358564864
> To my mind these should be WARN's as they have the potential to be 
> significantly impacting the clusters performance as a whole.





[jira] [Comment Edited] (CASSANDRA-8907) Raise GCInspector alerts to WARN

2015-09-04 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731344#comment-14731344
 ] 

Johnny Miller edited comment on CASSANDRA-8907 at 9/4/15 8:16 PM:
--

[~JoshuaMcKenzie] [~eanujwa] I would advocate a default of disabled and when 
disabled log out at INFO with the current behaviour. This should avoid breaking 
any existing log monitoring or alarming anyone with a load of new WARN log 
messages following a minor upgrade.

That way the onus is on the user to determine what level of pause for their 
specific use case warrants a WARN log. As long as its clearly documented and in 
the yaml, users should be aware of it when reviewing their config.

Maybe we should revisit the default level in a later major release following 
feedback?


was (Author: johnny15676):
[~JoshuaMcKenzie] [~eanujwa] I would advocate a default of disabled and when 
disabled log out at INFO with the current behaviour. This should avoid breaking 
any existing log monitoring or alarming anyone with a load of new WARN log 
messages following a minor upgrade.

That way the onus is on the user to determine what level of pause for their 
specific use case warrants a WARN log. As long as its clearly documented and in 
the yaml, users should be aware of it when reviewing their config.

Maybe we should revisit the default level in a later major release following 
feedback? Possibly default it to 200ms in 3.0?

> Raise GCInspector alerts to WARN
> 
>
> Key: CASSANDRA-8907
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8907
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Adam Hattrell
>Assignee: Amit Singh Chowdhery
>  Labels: patch
> Attachments: cassnadra-8907.patch
>
>
> I'm fairly regularly running into folks wondering why their applications are 
> reporting down nodes.  Yet, they report, when they grepped the logs they have 
> no WARN or ERRORs listed.
> Nine times out of ten, when I look through the logs we see a ton of ParNew or 
> CMS gc pauses occurring similar to the following:
> INFO [ScheduledTasks:1] 2013-03-07 18:44:46,795 GCInspector.java (line 122) 
> GC for ConcurrentMarkSweep: 1835 ms for 3 collections, 2606015656 used; max 
> is 10611589120
> INFO [ScheduledTasks:1] 2013-03-07 19:45:08,029 GCInspector.java (line 122) 
> GC for ParNew: 9866 ms for 8 collections, 2910124308 used; max is 6358564864
> To my mind these should be WARN's as they have the potential to be 
> significantly impacting the clusters performance as a whole.





[jira] [Commented] (CASSANDRA-8907) Raise GCInspector alerts to WARN

2015-06-15 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14585808#comment-14585808
 ] 

Johnny Miller commented on CASSANDRA-8907:
--

[~achowdhe] That sounds good to me.

 Raise GCInspector alerts to WARN
 

 Key: CASSANDRA-8907
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8907
 Project: Cassandra
  Issue Type: Improvement
Reporter: Adam Hattrell

 I'm fairly regularly running into folks wondering why their applications are 
 reporting down nodes.  Yet, they report, when they grepped the logs they have 
 no WARN or ERRORs listed.
 Nine times out of ten, when I look through the logs we see a ton of ParNew or 
 CMS gc pauses occurring similar to the following:
 INFO [ScheduledTasks:1] 2013-03-07 18:44:46,795 GCInspector.java (line 122) 
 GC for ConcurrentMarkSweep: 1835 ms for 3 collections, 2606015656 used; max 
 is 10611589120
 INFO [ScheduledTasks:1] 2013-03-07 19:45:08,029 GCInspector.java (line 122) 
 GC for ParNew: 9866 ms for 8 collections, 2910124308 used; max is 6358564864
 To my mind these should be WARN's as they have the potential to be 
 significantly impacting the clusters performance as a whole.





[jira] [Commented] (CASSANDRA-8907) Raise GCInspector alerts to WARN

2015-03-31 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388534#comment-14388534
 ] 

Johnny Miller commented on CASSANDRA-8907:
--

Big +1 on this one. I would even go a bit further and externalise the 200ms 
threshold so it can be tuned for specific setups.

 Raise GCInspector alerts to WARN
 

 Key: CASSANDRA-8907
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8907
 Project: Cassandra
  Issue Type: Improvement
Reporter: Adam Hattrell

 I'm fairly regularly running into folks wondering why their applications are 
 reporting down nodes.  Yet, they report, when they grepped the logs they have 
 no WARN or ERRORs listed.
 Nine times out of ten, when I look through the logs we see a ton of ParNew or 
 CMS gc pauses occurring similar to the following:
 INFO [ScheduledTasks:1] 2013-03-07 18:44:46,795 GCInspector.java (line 122) 
 GC for ConcurrentMarkSweep: 1835 ms for 3 collections, 2606015656 used; max 
 is 10611589120
 INFO [ScheduledTasks:1] 2013-03-07 19:45:08,029 GCInspector.java (line 122) 
 GC for ParNew: 9866 ms for 8 collections, 2910124308 used; max is 6358564864
 To my mind these should be WARN's as they have the potential to be 
 significantly impacting the clusters performance as a whole.





[jira] [Comment Edited] (CASSANDRA-8790) Improve handling of non-printable unicode characters in text fields and CQLSH

2015-02-12 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317902#comment-14317902
 ] 

Johnny Miller edited comment on CASSANDRA-8790 at 2/12/15 9:45 AM:
---

[~brandon.williams] I appreciate your point, but [~thobbs]'s example isn't the 
actual problem. It's specifically non-printable unicode characters.

This isn't a problem when you're using the drivers - it only occurs when using 
CQLSH.

The problem is that if I start writing data like this via the drivers, and then 
for some reason ever need to query it via CQLSH, I will get the wrong answer 
when doing a select.

i.e. SELECT * from testunicode where id = 'state\u001Ccard' will not return any 
results via CQLSH even though the data does exist in my table, while the same 
query via the drivers (Java) would return a result.

The workaround of using blobs is not great (IMHO) - it's a shame to have to 
model your data around this specific CQLSH limitation.

We should either provide some functionality to handle this, or alternatively 
raise an error if someone enters a non-printable unicode character in CQLSH, as 
the answer we get back is incorrect and likely to mislead people.
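
A minimal standalone Python sketch of the underlying issue (no Cassandra 
required): the key contains U+001C, a non-printable control character, and its 
UTF-8 bytes match the blob literal used in the workaround query 
(0x73746174651c63617264).

```python
# The key written via the driver embeds U+001C (a non-printable control
# character), which cannot be typed as a cqlsh string literal.
key = "state" + "\u001C" + "card"

assert "\x1c" in key          # the control character really is in the string
assert not key.isprintable()

# The blob workaround round-trips it losslessly as hex:
print("0x" + key.encode("utf-8").hex())  # -> 0x73746174651c63617264
```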


was (Author: johnny15676):
[~brandon.williams] I appreciate your point, but [~thobbs] example isn't the 
actual problem. Its specifically non-printable unicode characters.

This isn't a problem, when your using the drivers - it is only when using CQLSH.

The problem is that if I start writing data like this via the drivers and then 
for some reason I ever need to every query it via CQLSH I will get the wrong 
answer when doing a select.

i.e. SELECT * from testunicode where id = 'state\u001Ccard' will not return any 
results via CQLSH when the data does actually exist in my table and the same 
query via the drivers (java) would actually return a result.

The workaround to use blobs is not great (IMHO) - its a shame to have model 
your data round this specific CQLSH limitation

We should either provide some functionality to handle this this or 
alternatively error if someone enters a non-printable unicode character in 
CQLSH as the answer we get back is incorrect and likely to mislead people.

 Improve handling of non-printable unicode characters in text fields and CQLSH
 -

 Key: CASSANDRA-8790
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8790
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Minor

 Currently, to store a string/text value that contains a non-printable unicode 
 character and then subsequently be able to query it via CQLSH, I need to store 
 the field as a blob via the blobAsText and textAsBlob functions. 
 This is not really optimal - it would be better if CQLSH handled this rather 
 than having to model data around this limitation.
 For example:
 {code:title=Example Code|borderStyle=solid}
 String createTableCql = "CREATE TABLE IF NOT EXISTS test_ks.testunicode (id blob PRIMARY KEY, inserted_on timestamp, lorem text)";
 session.execute(createTableCql);
 System.out.println("Table created.");

 String dimension1 = "state";
 String dimension2 = "card";
 String key = dimension1 + '\u001C' + dimension2;
 Date now = new Date();
 String lorem = "Lorem ipsum dolor sit amet.";

 String insertcql = "INSERT INTO testunicode (id, inserted_on, lorem) VALUES (textAsBlob(?), ?, ?)";
 PreparedStatement ps = session.prepare(insertcql);
 BoundStatement bs = new BoundStatement(ps);
 bs.bind(key, now, lorem);
 session.execute(bs);
 System.out.println("Row inserted with key " + key);

 String selectcql = "SELECT blobAsText(id) AS id, inserted_on, lorem FROM testunicode WHERE id = textAsBlob(?)";
 PreparedStatement ps2 = session.prepare(selectcql);
 BoundStatement bs2 = new BoundStatement(ps2);
 bs2.bind(key);
 ResultSet results = session.execute(bs2);

 System.out.println("Got results...");

 for (Row row : results) {
     System.out.println(String.format("%-30s\t%-20s\t%-20s",
         row.getString("id"), row.getDate("inserted_on"), row.getString("lorem")));
 }
 {code}
 And to query via CQLSH:
 {code}
 select * from testunicode where id = 0x73746174651c63617264 ;

  id                     | inserted_on          | lorem
 ------------------------+----------------------+-----------------------------
  0x73746174651c63617264 | 2015-02-11 20:32:20+ | Lorem ipsum dolor sit amet.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8790) Improve handling of non-printable unicode characters in text fields and CQLSH

2015-02-12 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317902#comment-14317902
 ] 

Johnny Miller commented on CASSANDRA-8790:
--

[~brandon.williams] I appreciate your point, but [~thobbs]'s example isn't the 
actual problem. It's specifically non-printable unicode characters.

This isn't a problem when you're using the drivers - it is only when using CQLSH.

The problem is that if I start writing data like this via the drivers and then 
for some reason I ever need to query it via CQLSH, I will get the wrong answer 
when doing a select.

i.e. SELECT * from testunicode where id = 'state\u001Ccard' will not return any 
results via CQLSH when the data does actually exist in my table, and the same 
query via the drivers (Java) would actually return a result.

The workaround to use blobs is not great (IMHO) - it's a shame to have to model 
your data around this specific CQLSH limitation.

We should either provide some functionality to handle this or alternatively 
error if someone enters a non-printable unicode character in CQLSH, as the 
answer we get back is incorrect and likely to mislead people.



[jira] [Comment Edited] (CASSANDRA-8790) Improve handling of non-printable unicode characters in text fields and CQLSH

2015-02-12 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317902#comment-14317902
 ] 

Johnny Miller edited comment on CASSANDRA-8790 at 2/12/15 9:48 AM:
---

[~brandon.williams] I appreciate your point, but [~thobbs]'s example isn't the 
actual problem. It's specifically non-printable unicode characters.

This isn't a problem when you're using the drivers - it is only when using CQLSH.

The problem is that if I start writing data like this via the drivers and then 
for some reason I ever need to query it via CQLSH, I will get the wrong answer 
when doing a select.

i.e. SELECT * from testunicode where id = 'state\u001Ccard' will not return any 
results via CQLSH when the data does actually exist in my table, and the same 
query via the drivers (Java) would actually return a result.

The workaround to use blobs is not great (IMHO) - it's a shame to have to model 
your data around this specific CQLSH limitation.

We should either provide some functionality to handle this or alternatively 
error if someone enters a non-printable unicode character in CQLSH, as the 
answer we get back is incorrect and likely to mislead people.

If you have an example of handling this via CQLSH on any OS, please share it.


was (Author: johnny15676):
[~brandon.williams] I appreciate your point, but [~thobbs] example isn't the 
actual problem. Its specifically non-printable unicode characters.

This isn't a problem, when your using the drivers - it is only when using CQLSH.

The problem is that if I start writing data like this via the drivers and then 
for some reason I ever need to query it via CQLSH I will get the wrong answer 
when doing a select.

i.e. SELECT * from testunicode where id = 'state\u001Ccard' will not return any 
results via CQLSH when the data does actually exist in my table and the same 
query via the drivers (java) would actually return a result.

The workaround to use blobs is not great (IMHO) - its a shame to have model 
your data round this specific CQLSH limitation

We should either provide some functionality to handle this this or 
alternatively error if someone enters a non-printable unicode character in 
CQLSH as the answer we get back is incorrect and likely to mislead people.



[jira] [Comment Edited] (CASSANDRA-8790) Improve handling of non-printable unicode characters in text fields and CQLSH

2015-02-12 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317902#comment-14317902
 ] 

Johnny Miller edited comment on CASSANDRA-8790 at 2/12/15 9:49 AM:
---

[~brandon.williams] I appreciate your point, but [~thobbs]'s example isn't the 
actual problem. It's specifically non-printable unicode characters.

This isn't a problem when you're using the drivers - it is only when using CQLSH.

The problem is that if I start writing data like this via the drivers and then 
for some reason I ever need to query it via CQLSH, I will get the wrong answer 
when doing a select.

i.e. SELECT * from testunicode where id = 'state\u001Ccard' will not return any 
results via CQLSH when the data does actually exist in my table, and the same 
query via the drivers (Java) would actually return a result.

The workaround to use blobs is not great (IMHO) - it's a shame to have to model 
your data around this specific CQLSH limitation.

We should either provide some functionality to handle this or alternatively 
error if someone enters a non-printable unicode character in CQLSH, as the 
answer we get back is incorrect and likely to mislead people.

If you have an example of handling this via CQLSH on any OS, please share it.


was (Author: johnny15676):
[~brandon.williams] I appreciate your point, but [~thobbs] example isn't the 
actual problem. Its specifically non-printable unicode characters.

This isn't a problem, when your using the drivers - it is only when using CQLSH.

The problem is that if I start writing data like this via the drivers and then 
for some reason I ever need to query it via CQLSH I will get the wrong answer 
when doing a select.

i.e. SELECT * from testunicode where id = 'state\u001Ccard' will not return any 
results via CQLSH when the data does actually exist in my table and the same 
query via the drivers (java) would actually return a result.

The workaround to use blobs is not great (IMHO) - its a shame to have model 
your data round this specific CQLSH limitation

We should either provide some functionality to handle this this or 
alternatively error if someone enters a non-printable unicode character in 
CQLSH as the answer we get back is incorrect and likely to mislead people.

If you have an example of handling this via CQLSH on any OS, please share it



[jira] [Commented] (CASSANDRA-8790) Improve handling of escape sequence for CQL string literals

2015-02-12 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318135#comment-14318135
 ] 

Johnny Miller commented on CASSANDRA-8790:
--

Didn't think of using a file - that's a good idea!

 Improve handling of escape sequence for CQL string literals
 ---

 Key: CASSANDRA-8790
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8790
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Minor



[jira] [Commented] (CASSANDRA-8790) Improve handling of unicode characters in text fields

2015-02-11 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317006#comment-14317006
 ] 

Johnny Miller commented on CASSANDRA-8790:
--

I think it is specifically around non-printable unicode characters e.g. \u001C

 Improve handling of unicode characters in text fields
 -

 Key: CASSANDRA-8790
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8790
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Minor

 Currently, to store a string/text value that contains a unicode character and 
 then subsequently be able to query it via CQLSH, I need to store the field as 
 a blob via the blobAsText and textAsBlob functions. 
 This is not really optimal - it would be better if CQLSH handled this rather 
 than having to model data around this limitation.
 For example:
 {code:title=Example Code|borderStyle=solid}
 String createTableCql = "CREATE TABLE IF NOT EXISTS test_ks.testunicode (id blob PRIMARY KEY, inserted_on timestamp, lorem text)";
 session.execute(createTableCql);
 System.out.println("Table created.");

 String dimension1 = "state";
 String dimension2 = "card";
 String key = dimension1 + '\u001C' + dimension2;
 Date now = new Date();
 String lorem = "Lorem ipsum dolor sit amet.";

 String insertcql = "INSERT INTO testunicode (id, inserted_on, lorem) VALUES (textAsBlob(?), ?, ?)";
 PreparedStatement ps = session.prepare(insertcql);
 BoundStatement bs = new BoundStatement(ps);
 bs.bind(key, now, lorem);
 session.execute(bs);
 System.out.println("Row inserted with key " + key);

 String selectcql = "SELECT blobAsText(id) AS id, inserted_on, lorem FROM testunicode WHERE id = textAsBlob(?)";
 PreparedStatement ps2 = session.prepare(selectcql);
 BoundStatement bs2 = new BoundStatement(ps2);
 bs2.bind(key);
 ResultSet results = session.execute(bs2);

 System.out.println("Got results...");

 for (Row row : results) {
     System.out.println(String.format("%-30s\t%-20s\t%-20s",
         row.getString("id"), row.getDate("inserted_on"), row.getString("lorem")));
 }
 {code}
 And to query via CQLSH:
 select * from testunicode where id = 0x73746174651c63617264 ;
  id                     | inserted_on          | lorem
 ------------------------+----------------------+-----------------------------
  0x73746174651c63617264 | 2015-02-11 20:32:20+ | Lorem ipsum dolor sit amet.





[jira] [Updated] (CASSANDRA-8790) Improve handling of non-printable unicode characters in text fields and CQLSH

2015-02-11 Thread Johnny Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Miller updated CASSANDRA-8790:
-
Description: 
Currently, to store a string/text value that contains a unicode character and 
then subsequently be able to query it via CQLSH, I need to store the field as 
a blob via the blobAsText and textAsBlob functions. 

This is not really optimal - it would be better if CQLSH handled this rather 
than having to model data around this limitation.

For example:

{code:title=Example Code|borderStyle=solid}
String createTableCql = "CREATE TABLE IF NOT EXISTS test_ks.testunicode (id blob PRIMARY KEY, inserted_on timestamp, lorem text)";
session.execute(createTableCql);
System.out.println("Table created.");

String dimension1 = "state";
String dimension2 = "card";
String key = dimension1 + '\u001C' + dimension2;
Date now = new Date();
String lorem = "Lorem ipsum dolor sit amet.";

String insertcql = "INSERT INTO testunicode (id, inserted_on, lorem) VALUES (textAsBlob(?), ?, ?)";
PreparedStatement ps = session.prepare(insertcql);
BoundStatement bs = new BoundStatement(ps);
bs.bind(key, now, lorem);
session.execute(bs);
System.out.println("Row inserted with key " + key);

String selectcql = "SELECT blobAsText(id) AS id, inserted_on, lorem FROM testunicode WHERE id = textAsBlob(?)";
PreparedStatement ps2 = session.prepare(selectcql);
BoundStatement bs2 = new BoundStatement(ps2);
bs2.bind(key);
ResultSet results = session.execute(bs2);

System.out.println("Got results...");

for (Row row : results) {
    System.out.println(String.format("%-30s\t%-20s\t%-20s",
        row.getString("id"), row.getDate("inserted_on"), row.getString("lorem")));
}
{code}

And to query via CQLSH:

select * from testunicode where id = 0x73746174651c63617264 ;

 id                     | inserted_on          | lorem
------------------------+----------------------+-----------------------------
 0x73746174651c63617264 | 2015-02-11 20:32:20+ | Lorem ipsum dolor sit amet.


  was:
Currently, to store a string/text value that contains a unicode character and 
then subsequently be able to query it via CQLSH, I need to store the field as 
a blob via the blobAsText and textAsBlob functions. 

This is not really optimal - it would be better if CQLSH handled this rather 
than having to model data around this limitation.

For example:

{code:title=Example Code|borderStyle=solid}
String createTableCql = "CREATE TABLE IF NOT EXISTS test_ks.testunicode (id blob PRIMARY KEY, inserted_on timestamp, lorem text)";
session.execute(createTableCql);
System.out.println("Table created.");

String dimension1 = "state";
String dimension2 = "card";
String key = dimension1 + '\u001C' + dimension2;
Date now = new Date();
String lorem = "Lorem ipsum dolor sit amet.";

String insertcql = "INSERT INTO testunicode (id, inserted_on, lorem) VALUES (textAsBlob(?), ?, ?)";
PreparedStatement ps = session.prepare(insertcql);
BoundStatement bs = new BoundStatement(ps);
bs.bind(key, now, lorem);
session.execute(bs);
System.out.println("Row inserted with key " + key);

String selectcql = "SELECT blobAsText(id) AS id, inserted_on, lorem FROM testunicode WHERE id = textAsBlob(?)";
PreparedStatement ps2 = session.prepare(selectcql);
BoundStatement bs2 = new BoundStatement(ps2);
bs2.bind(key);
ResultSet results = session.execute(bs2);

System.out.println("Got results...");

for (Row row : results) {
    System.out.println(String.format("%-30s\t%-20s\t%-20s",
        row.getString("id"), row.getDate("inserted_on"), row.getString("lorem")));
}
{code}

And to query via CQLSH:

select * from testunicode where id = 0x73746174651c63617264 ;

 id                     | inserted_on          | lorem
------------------------+----------------------+-----------------------------
 0x73746174651c63617264 | 2015-02-11 20:32:20+ | Lorem ipsum dolor sit amet.

Summary: Improve handling of non-printable unicode characters in text 
fields and CQLSH  (was: Improve handling of unicode characters in text fields)

 Improve handling of non-printable unicode characters in text fields and CQLSH
 -

 Key: CASSANDRA-8790
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8790
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Minor

 Currently, to store a string/text value that contains a unicode character and 
 then subsequently be able to query it via CQLSH, I need to store the field as 
 a blob via the blobAsText and textAsBlob functions. 
 This is not really optimal - it would be better if CQLSH handled this rather 
 

[jira] [Created] (CASSANDRA-8790) Improve handling of unicode characters in text fields

2015-02-11 Thread Johnny Miller (JIRA)
Johnny Miller created CASSANDRA-8790:


 Summary: Improve handling of unicode characters in text fields
 Key: CASSANDRA-8790
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8790
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Minor


Currently, to store a string/text value that contains a unicode character and 
then subsequently be able to query it via CQLSH, I need to store the field as 
a blob via the blobAsText and textAsBlob functions. 

This is not really optimal - it would be better if CQLSH handled this rather 
than having to model data around this limitation.

For example:

{code:title=Example Code|borderStyle=solid}
String createTableCql = "CREATE TABLE IF NOT EXISTS test_ks.testunicode (id blob PRIMARY KEY, inserted_on timestamp, lorem text)";
session.execute(createTableCql);
System.out.println("Table created.");

String dimension1 = "state";
String dimension2 = "card";
String key = dimension1 + '\u001C' + dimension2;
Date now = new Date();
String lorem = "Lorem ipsum dolor sit amet.";

String insertcql = "INSERT INTO testunicode (id, inserted_on, lorem) VALUES (textAsBlob(?), ?, ?)";
PreparedStatement ps = session.prepare(insertcql);
BoundStatement bs = new BoundStatement(ps);
bs.bind(key, now, lorem);
session.execute(bs);
System.out.println("Row inserted with key " + key);

String selectcql = "SELECT blobAsText(id) AS id, inserted_on, lorem FROM testunicode WHERE id = textAsBlob(?)";
PreparedStatement ps2 = session.prepare(selectcql);
BoundStatement bs2 = new BoundStatement(ps2);
bs2.bind(key);
ResultSet results = session.execute(bs2);

System.out.println("Got results...");

for (Row row : results) {
    System.out.println(String.format("%-30s\t%-20s\t%-20s",
        row.getString("id"), row.getDate("inserted_on"), row.getString("lorem")));
}
{code}

And to query via CQLSH:

select * from testunicode where id = 0x73746174651c63617264 ;

 id                     | inserted_on          | lorem
------------------------+----------------------+-----------------------------
 0x73746174651c63617264 | 2015-02-11 20:32:20+ | Lorem ipsum dolor sit amet.





[jira] [Commented] (CASSANDRA-8790) Improve handling of non-printable unicode characters in text fields and CQLSH

2015-02-11 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317036#comment-14317036
 ] 

Johnny Miller commented on CASSANDRA-8790:
--

The cause of this would just appear to be how CQLSH displays and interprets 
characters like this. The actual data is stored correctly.

This works via CQLSH:
{code}
select * from testunicode1 where id = 'ק ר ש';
{code}

This doesn't:
{code}
SELECT * from testunicode where id = 'state\u001Ccard';
{code}

A potential solution is to introduce a function in CQLSH that will enable users 
to escape and unescape fields such as this.

For example
{code}
SELECT * from testunicode where id = escape('state\u001Ccard');
{code}
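Until something like an escape() function exists, the blob literal can be computed by hand. A minimal sketch (class and method names hypothetical) that hex-encodes the UTF-8 bytes of a key into the literal already used elsewhere in this ticket:

```java
import java.nio.charset.StandardCharsets;

// Sketch (class and method names hypothetical): build the blob literal that
// CQLSH accepts today, as a stopgap for the proposed escape() function.
public class BlobLiteral {

    // Hex-encode the UTF-8 bytes of a string as a CQL blob literal.
    static String toBlobLiteral(String s) {
        StringBuilder sb = new StringBuilder("0x");
        for (byte b : s.getBytes(StandardCharsets.UTF_8)) {
            sb.append(String.format("%02x", b & 0xff));  // mask to avoid sign extension
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Matches the blob literal queried elsewhere in this ticket.
        System.out.println(toBlobLiteral("state" + '\u001C' + "card"));
        // -> 0x73746174651c63617264
    }
}
```

The printed value can be pasted straight into a CQLSH WHERE clause against the blob-keyed table, e.g. `select * from testunicode where id = 0x73746174651c63617264 ;`.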




[jira] [Updated] (CASSANDRA-8790) Improve handling of non-printable unicode characters in text fields and CQLSH

2015-02-11 Thread Johnny Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Miller updated CASSANDRA-8790:
-
Description: 
Currently, to store a string/text value that contains a non-printable unicode 
character and then subsequently be able to query it via CQLSH, I need to store 
the field as a blob via the blobAsText and textAsBlob functions. 

This is not really optimal - it would be better if CQLSH handled this rather 
than having to model data around this limitation.

For example:

{code:title=Example Code|borderStyle=solid}
String createTableCql = "CREATE TABLE IF NOT EXISTS test_ks.testunicode (id blob PRIMARY KEY, inserted_on timestamp, lorem text)";
session.execute(createTableCql);
System.out.println("Table created.");

String dimension1 = "state";
String dimension2 = "card";
String key = dimension1 + '\u001C' + dimension2;
Date now = new Date();
String lorem = "Lorem ipsum dolor sit amet.";

String insertcql = "INSERT INTO testunicode (id, inserted_on, lorem) VALUES (textAsBlob(?), ?, ?)";
PreparedStatement ps = session.prepare(insertcql);
BoundStatement bs = new BoundStatement(ps);
bs.bind(key, now, lorem);
session.execute(bs);
System.out.println("Row inserted with key " + key);

String selectcql = "SELECT blobAsText(id) AS id, inserted_on, lorem FROM testunicode WHERE id = textAsBlob(?)";
PreparedStatement ps2 = session.prepare(selectcql);
BoundStatement bs2 = new BoundStatement(ps2);
bs2.bind(key);
ResultSet results = session.execute(bs2);

System.out.println("Got results...");

for (Row row : results) {
    System.out.println(String.format("%-30s\t%-20s\t%-20s",
        row.getString("id"), row.getDate("inserted_on"), row.getString("lorem")));
}
{code}

And to query via CQLSH:

{code}
select * from testunicode where id = 0x73746174651c63617264 ;

 id                     | inserted_on          | lorem
------------------------+----------------------+-----------------------------
 0x73746174651c63617264 | 2015-02-11 20:32:20+ | Lorem ipsum dolor sit amet.
{code}

  was:
Currently, to store a string/text value that contains a unicode character and 
then subsequently be able to query it via CQLSH, I need to store the field as 
a blob via the blobAsText and textAsBlob functions. 

This is not really optimal - it would be better if CQLSH handled this rather 
than having to model data around this limitation.

For example:

{code:title=Example Code|borderStyle=solid}
String createTableCql = "CREATE TABLE IF NOT EXISTS test_ks.testunicode (id blob PRIMARY KEY, inserted_on timestamp, lorem text)";
session.execute(createTableCql);
System.out.println("Table created.");

String dimension1 = "state";
String dimension2 = "card";
String key = dimension1 + '\u001C' + dimension2;
Date now = new Date();
String lorem = "Lorem ipsum dolor sit amet.";

String insertcql = "INSERT INTO testunicode (id, inserted_on, lorem) VALUES (textAsBlob(?), ?, ?)";
PreparedStatement ps = session.prepare(insertcql);
BoundStatement bs = new BoundStatement(ps);
bs.bind(key, now, lorem);
session.execute(bs);
System.out.println("Row inserted with key " + key);

String selectcql = "SELECT blobAsText(id) AS id, inserted_on, lorem FROM testunicode WHERE id = textAsBlob(?)";
PreparedStatement ps2 = session.prepare(selectcql);
BoundStatement bs2 = new BoundStatement(ps2);
bs2.bind(key);
ResultSet results = session.execute(bs2);

System.out.println("Got results...");

for (Row row : results) {
    System.out.println(String.format("%-30s\t%-20s\t%-20s",
        row.getString("id"), row.getDate("inserted_on"), row.getString("lorem")));
}
{code}

And to query via CQLSH:

select * from testunicode where id = 0x73746174651c63617264 ;

 id                     | inserted_on          | lorem
------------------------+----------------------+-----------------------------
 0x73746174651c63617264 | 2015-02-11 20:32:20+ | Lorem ipsum dolor sit amet.



 Improve handling of non-printable unicode characters in text fields and CQLSH
 -

 Key: CASSANDRA-8790
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8790
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Minor

 Currently to store a string/text value that contains a non-printable unicode 
 character and then subsequently be able to query it in CQLSH, I need to store the 
 field as a blob via the blobAsText and textAsBlob functions. 
 This is not really optimal - it would be better if CQLSH handled this rather 
 than having to model data around this limitation.
 For example:
 {code:title=Example Code|borderStyle=solid}
 String 

[jira] [Updated] (CASSANDRA-8618) Password stored in cqlshrc file does not work with % character

2015-01-15 Thread Johnny Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Miller updated CASSANDRA-8618:
-
Attachment: trunk-CASSANDRA-8168.txt

Here's a patch with the change to use RawConfigParser for the password field, if 
the loss of interpolation support on that field only is acceptable.

 Password stored in cqlshrc file does not work with % character
 --

 Key: CASSANDRA-8618
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8618
 Project: Cassandra
  Issue Type: Bug
Reporter: Johnny Miller
Priority: Trivial
 Attachments: trunk-CASSANDRA-8168.txt


 Passwords stored in the cqlshrc file that contain the % character do not work.
 For example: BD%^r9dSv!z
 The workaround is to escape it with an additional %
 e.g. BD%%^r9dSv!z
 It would be better if this was done automatically rather than having to add 
 escape characters to the cqlshrc file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8618) Password stored in cqlshrc file does not work with % character

2015-01-15 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14278512#comment-14278512
 ] 

Johnny Miller commented on CASSANDRA-8618:
--

I've had a look at the code and am happy to pick this up, but I have a 
functional question around this. The % is used to support interpolation in the 
cqlshrc file via the Python SafeConfigParser. 

One approach would be to use RawConfigParser for the password field only; 
however, this would break anyone who was using interpolation on the password 
field in the cqlshrc file, while leaving the other fields supporting it.
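The behavioural difference can be sketched with Python's configparser (the modern module; cqlsh at the time used the Python 2 SafeConfigParser, which behaves the same way for this purpose, so this is a sketch rather than the actual cqlsh code):

```python
import configparser

cfg_text = "[authentication]\nusername = cassandra\npassword = BD%^r9dSv!z\n"

# An interpolating parser (as cqlsh used) chokes on a bare %:
# '%^' is not a valid interpolation sequence.
interp = configparser.ConfigParser()
interp.read_string(cfg_text)
try:
    interp.get("authentication", "password")
    raised = False
except configparser.InterpolationSyntaxError:
    raised = True
print("bare % rejected:", raised)

# RawConfigParser performs no interpolation, so the value reads back verbatim.
raw = configparser.RawConfigParser()
raw.read_string(cfg_text)
print(raw.get("authentication", "password"))  # BD%^r9dSv!z

# The documented workaround with an interpolating parser: escape % as %%.
interp2 = configparser.ConfigParser()
interp2.read_string("[authentication]\npassword = BD%%^r9dSv!z\n")
print(interp2.get("authentication", "password"))  # BD%^r9dSv!z
```

Switching only the password field to raw parsing keeps interpolation working elsewhere in the file, at the cost of breaking anyone interpolating inside the password value itself.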


 Password stored in cqlshrc file does not work with % character
 --

 Key: CASSANDRA-8618
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8618
 Project: Cassandra
  Issue Type: Bug
Reporter: Johnny Miller
Priority: Trivial

 Passwords stored in the cqlshrc file that contain the % character do not work.
 For example: BD%^r9dSv!z
 The workaround is to escape it with an additional %
 e.g. BD%%^r9dSv!z
 It would be better if this was done automatically rather than having to add 
 escape characters to the cqlshrc file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8626) Support for TTL on GRANT

2015-01-15 Thread Johnny Miller (JIRA)
Johnny Miller created CASSANDRA-8626:


 Summary: Support for TTL on GRANT
 Key: CASSANDRA-8626
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8626
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Trivial


There are a variety of situations where named users are only allowed to be 
granted temporary permissions to query Cassandra.

It would be useful if we supported the ability to attach a TTL to permissions 
so they automatically expire, e.g. GRANT SELECT ON ALL KEYSPACES TO johnny USING 
TTL 86400;




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8618) Password stored in cqlshrc file does not work with % character

2015-01-14 Thread Johnny Miller (JIRA)
Johnny Miller created CASSANDRA-8618:


 Summary: Password stored in cqlshrc file does not work with % 
character
 Key: CASSANDRA-8618
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8618
 Project: Cassandra
  Issue Type: Bug
Reporter: Johnny Miller


Passwords stored in the cqlshrc file that contain the % character do not work.

For example: BD%^r9dSv!z

The workaround is to escape it with an additional %

e.g. BD%%^r9dSv!z

It would be better if this was done automatically rather than having to add 
escape characters to the cqlshrc file.






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8369) Better error handling in CQLSH for invalid password

2014-11-24 Thread Johnny Miller (JIRA)
Johnny Miller created CASSANDRA-8369:


 Summary: Better error handling in CQLSH for invalid password
 Key: CASSANDRA-8369
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8369
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Minor


On C* 2.0.11/Cqlsh 4.1.1, logging in with an invalid password returns a raw 
stack trace rather than a more user-friendly error message.

For example - this is what you get back now:

root@cass1:~# cqlsh -u cassandra -p johnny
Traceback (most recent call last):
  File "/usr/bin/cqlsh", line 2113, in <module>
    main(*read_options(sys.argv[1:], os.environ))
  File "/usr/bin/cqlsh", line 2093, in main
    single_statement=options.execute)
  File "/usr/bin/cqlsh", line 505, in __init__
    password=password, cql_version=cqlver, transport=transport)
  File "/usr/share/dse/cassandra/lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/connection.py", line 143, in connect
  File "/usr/share/dse/cassandra/lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/connection.py", line 59, in __init__
  File "/usr/share/dse/cassandra/lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/thrifteries.py", line 157, in establish_connection
  File "/usr/share/dse/cassandra/lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/cassandra/Cassandra.py", line 465, in login
  File "/usr/share/dse/cassandra/lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/cassandra/Cassandra.py", line 486, in recv_login
cql.cassandra.ttypes.AuthenticationException: AuthenticationException(why='Username and/or password are incorrect')



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8082) Support finer grained Modify CQL permissions

2014-10-09 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14164863#comment-14164863
 ] 

Johnny Miller commented on CASSANDRA-8082:
--

[~iamaleksey] [~salleman] I think TRUNCATE is the main point of concern. I 
understand the constraints around the upserts/deletes and the difficulty with 
doing anything on that, but if we could do something on TRUNCATE it would still 
help. Can we re-open this ticket or should I create a new one?

 Support finer grained Modify CQL permissions
 

 Key: CASSANDRA-8082
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8082
 Project: Cassandra
  Issue Type: New Feature
Reporter: Johnny Miller

 Currently CQL permissions are grouped as:
 ALL   - All statements
 ALTER - ALTER KEYSPACE, ALTER TABLE, CREATE INDEX, DROP INDEX
 AUTHORIZE - GRANT, REVOKE
 CREATE - CREATE KEYSPACE, CREATE TABLE
 DROP - DROP KEYSPACE, DROP TABLE
 MODIFY - INSERT, DELETE, UPDATE, TRUNCATE
 SELECT - SELECT
 The MODIFY permission is too wide. There are plenty of scenarios where a user 
 should not be able to DELETE and TRUNCATE a table but should be able to INSERT and 
 UPDATE. 
 It would be great if Cassandra could either support defining permissions 
 dynamically or have additional finer grained MODIFY related permissions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8082) Support finer grained Modify CQL permissions

2014-10-09 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14164863#comment-14164863
 ] 

Johnny Miller edited comment on CASSANDRA-8082 at 10/9/14 7:29 AM:
---

[~iamaleksey] [~slebresne] I think TRUNCATE is the main point of concern. I 
understand the constraints around the upserts/deletes and the difficulty with 
doing anything on that, but if we could do something on TRUNCATE it would still 
help. Can we re-open this ticket or should I create a new one?


was (Author: johnny15676):
[~iamaleksey] [~salleman] I think TRUNCATE is the main point of concern. I 
understand the constraints around the upserts/deletes and the difficulty with 
doing anything on that, but if we could do something on TRUNCATE it would still 
help. Can we re-open this ticket or should I create a new one?

 Support finer grained Modify CQL permissions
 

 Key: CASSANDRA-8082
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8082
 Project: Cassandra
  Issue Type: New Feature
Reporter: Johnny Miller

 Currently CQL permissions are grouped as:
 ALL   - All statements
 ALTER - ALTER KEYSPACE, ALTER TABLE, CREATE INDEX, DROP INDEX
 AUTHORIZE - GRANT, REVOKE
 CREATE - CREATE KEYSPACE, CREATE TABLE
 DROP - DROP KEYSPACE, DROP TABLE
 MODIFY - INSERT, DELETE, UPDATE, TRUNCATE
 SELECT - SELECT
 The MODIFY permission is too wide. There are plenty of scenarios where a user 
 should not be able to DELETE and TRUNCATE a table but should be able to INSERT and 
 UPDATE. 
 It would be great if Cassandra could either support defining permissions 
 dynamically or have additional finer grained MODIFY related permissions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8082) Support finer grained Modify CQL permissions

2014-10-08 Thread Johnny Miller (JIRA)
Johnny Miller created CASSANDRA-8082:


 Summary: Support finer grained Modify CQL permissions
 Key: CASSANDRA-8082
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8082
 Project: Cassandra
  Issue Type: New Feature
Reporter: Johnny Miller


Currently CQL permissions are grouped as:

ALL - All statements
ALTER - ALTER KEYSPACE, ALTER TABLE, CREATE INDEX, DROP INDEX
AUTHORIZE - GRANT, REVOKE
CREATE - CREATE KEYSPACE, CREATE TABLE
DROP - DROP KEYSPACE, DROP TABLE
MODIFY - INSERT, DELETE, UPDATE, TRUNCATE
SELECT - SELECT

The MODIFY permission is too wide. There are plenty of scenarios where a user 
should not be able to DELETE and TRUNCATE a table but should be able to INSERT and 
UPDATE. 

It would be great if Cassandra could either support defining permissions 
dynamically or have additional finer grained MODIFY related permissions.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7653) Add role based access control to Cassandra

2014-10-08 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14163345#comment-14163345
 ] 

Johnny Miller commented on CASSANDRA-7653:
--

[~mikea] [~hansvanderlinde] I have created another JIRA (CASSANDRA-8082) for 
the finer grained permissions.

 Add role based access control to Cassandra
 --

 Key: CASSANDRA-7653
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7653
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Mike Adamson
Assignee: Mike Adamson
 Fix For: 3.0

 Attachments: 7653.patch


 The current authentication model supports granting permissions to individual 
 users. While this is OK for small or medium organizations wanting to 
 implement authorization, it does not work well in large organizations because 
 of the overhead of having to maintain the permissions for each user.
 Introducing roles into the authentication model would allow sets of 
 permissions to be controlled in one place as a role and then the role granted 
 to users. Roles should also be able to be granted to other roles to allow 
 hierarchical sets of permissions to be built up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-7463) Update CQLSSTableWriter to allow parallel writing of SSTables on the same table within the same JVM

2014-06-27 Thread Johnny Miller (JIRA)
Johnny Miller created CASSANDRA-7463:


 Summary: Update CQLSSTableWriter to allow parallel writing of 
SSTables on the same table within the same JVM
 Key: CASSANDRA-7463
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7463
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller


Currently it is not possible to programmatically write multiple SSTables for the 
same table in parallel using the CQLSSTableWriter. This is quite a limitation 
and the workaround of attempting to do this in a separate JVM is not a great 
solution.

See: 
http://stackoverflow.com/questions/24396902/using-cqlsstablewriter-concurrently



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7046) Update nodetool commands to output the date and time they were run on

2014-06-10 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14026657#comment-14026657
 ] 

Johnny Miller commented on CASSANDRA-7046:
--

[~clardeur] Do you know if this is making it into a release?

 Update nodetool commands to output the date and time they were run on
 -

 Key: CASSANDRA-7046
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7046
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Assignee: Clément Lardeur
Priority: Trivial
  Labels: lhf
 Attachments: trunk-7046-v1.patch


 It would help if the various nodetool commands also output the system date and 
 time they were run. Often these commands are executed and then we look at the 
 cassandra log files to try and find out what was happening at that time. 
 This is certainly just a convenience feature, but it would be nice to have 
 the information in there to aid with diagnostics.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7311) Enable incremental backup on a per-keyspace level

2014-05-30 Thread Johnny Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johnny Miller updated CASSANDRA-7311:
-

Labels: lhf  (was: )

 Enable incremental backup on a per-keyspace level
 -

 Key: CASSANDRA-7311
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7311
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Minor
  Labels: lhf

 Currently incremental backups are globally defined; however, this is not 
 always appropriate or required for all keyspaces in a cluster. 
 As this is quite expensive, it would be preferred to either specify the 
 keyspaces that need this (or exclude the ones that don't).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7311) Enable incremental backup on a per-keyspace level

2014-05-28 Thread Johnny Miller (JIRA)
Johnny Miller created CASSANDRA-7311:


 Summary: Enable incremental backup on a per-keyspace level
 Key: CASSANDRA-7311
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7311
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Minor


Currently incremental backups are globally defined; however, this is not always 
appropriate or required for all keyspaces in a cluster. 

As this is quite expensive, it would be preferred to either specify the 
keyspaces that need this (or exclude the ones that don't).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7230) Add a size() function to return the number of elements in a collection

2014-05-15 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13997518#comment-13997518
 ] 

Johnny Miller commented on CASSANDRA-7230:
--

I have also encountered devs and admins asking for this.

 Add a size() function to return the number of elements in a collection
 --

 Key: CASSANDRA-7230
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7230
 Project: Cassandra
  Issue Type: New Feature
Reporter: Ron Cohen
Priority: Minor

 This has been asked in my training classes as an easy way to count the 
 elements in a collection.  If easy to implement, this would be an often used 
 feature.  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7046) Update nodetool commands to output the date and time they were run on

2014-04-23 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13977966#comment-13977966
 ] 

Johnny Miller commented on CASSANDRA-7046:
--

Clement - that's great! Exactly what would be good to see in there.

 Update nodetool commands to output the date and time they were run on
 -

 Key: CASSANDRA-7046
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7046
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Assignee: Clément Lardeur
Priority: Trivial
  Labels: lhf
 Attachments: trunk-7046-v1.patch


 It would help if the various nodetool commands also output the system date and 
 time they were run. Often these commands are executed and then we look at the 
 cassandra log files to try and find out what was happening at that time. 
 This is certainly just a convenience feature, but it would be nice to have 
 the information in there to aid with diagnostics.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7046) Update nodetool commands to output the date and time they were run on

2014-04-22 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13976771#comment-13976771
 ] 

Johnny Miller commented on CASSANDRA-7046:
--

Clement - yes, it is possible to do this but I don't agree that this 
information would not be valuable to have in the output of nodetool commands, 
particularly if we are trying to make them more user friendly. 

My experience of trying to help resolve issues is that this information is 
rarely to hand and makes trawling through logs to find out what was up somewhat 
challenging. 

 Update nodetool commands to output the date and time they were run on
 -

 Key: CASSANDRA-7046
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7046
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Trivial
  Labels: lhf

 It would help if the various nodetool commands also output the system date and 
 time they were run. Often these commands are executed and then we look at the 
 cassandra log files to try and find out what was happening at that time. 
 This is certainly just a convenience feature, but it would be nice to have 
 the information in there to aid with diagnostics.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7046) Update nodetool commands to output the date and time they were run on

2014-04-16 Thread Johnny Miller (JIRA)
Johnny Miller created CASSANDRA-7046:


 Summary: Update nodetool commands to output the date and time they 
were run on
 Key: CASSANDRA-7046
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7046
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Trivial


It would help if the various nodetool commands also output the system date and 
time they were run. Often these commands are executed and then we look at the 
cassandra log files to try and find out what was happening at that time. 

This is certainly just a convenience feature, but it would be nice to have the 
information in there to aid with diagnostics.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows

2014-01-15 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872048#comment-13872048
 ] 

Johnny Miller commented on CASSANDRA-6538:
--

bq. OTOH, if it's just a single column, how hard is it really to SELECT it? If 
it's not a blob then it's not huge, and if it is a blob then computing the size 
is pretty straightforward.

True, but it would be nice to have this available via the CQL CLI. I can 
appreciate there are ways to work around this, but I do think it's a nice 
(minor) feature to have in there.

 Provide a read-time CQL function to display the data size of columns and rows
 -

 Key: CASSANDRA-6538
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6538
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Minor

 It would be extremely useful to be able to work out the size of rows and 
 columns via CQL. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows

2014-01-03 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13861408#comment-13861408
 ] 

Johnny Miller commented on CASSANDRA-6538:
--

[~slebresne] That would certainly be useful, and if this issue should be 
dependent on CASSANDRA-4914 then it would be handy to have in the interim. 

However, would this mean we would only be able to find the size of an entire 
row and not an individual cell/column?

 Provide a read-time CQL function to display the data size of columns and rows
 -

 Key: CASSANDRA-6538
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6538
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Minor

 It would be extremely useful to be able to work out the size of rows and 
 columns via CQL. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows

2014-01-03 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13861593#comment-13861593
 ] 

Johnny Miller commented on CASSANDRA-6538:
--

bq.That being said, if we talking of CQL column values, it's probably not too 
hard to add a sizeof method that would return the size of a value instead of 
the value itself (not talking of doing any aggregation here). Not convinced it 
would be extremely useful but it's not particularly crazy either.

Unfortunately there are times when no one knows what was written into the 
column in the first place, so it would help to have this data in there also. 
Also, sometimes organisationally the people looking after Cassandra are not the 
same people writing to it so it can be a challenge to work this out.

 Provide a read-time CQL function to display the data size of columns and rows
 -

 Key: CASSANDRA-6538
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6538
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Minor

 It would be extremely useful to be able to work out the size of rows and 
 columns via CQL. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows

2014-01-02 Thread Johnny Miller (JIRA)
Johnny Miller created CASSANDRA-6538:


 Summary: Provide a read-time CQL function to display the data size 
of columns and rows
 Key: CASSANDRA-6538
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6538
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller


It would be extremely useful to be able to work out the size of rows and 
columns via CQL. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows

2014-01-02 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860622#comment-13860622
 ] 

Johnny Miller commented on CASSANDRA-6538:
--

This would help when debugging issues in environments where the suspicion is 
that specific rows contain larger-than-expected data sizes and I am unable to 
write a client to read the data and check its size.

 Provide a read-time CQL function to display the data size of columns and rows
 -

 Key: CASSANDRA-6538
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6538
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller

 It would be extremely useful to be able to work out the size of rows and 
 columns via CQL. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6257) Safety check on node joining cluster (based on last seen vs GC grace period) to avoid zombie data

2013-10-29 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13808410#comment-13808410
 ] 

Johnny Miller commented on CASSANDRA-6257:
--

Would it not be possible to have the node periodically heartbeat rather than 
only on boot? 

 Safety check on node joining cluster (based on last seen vs GC grace period) 
 to avoid zombie data
 -

 Key: CASSANDRA-6257
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6257
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Minor

 When a node is rejoining a cluster, it would be nice to have some form of 
 safety check that the cluster recognises the last time the node was part of 
 the cluster is greater than the GC grace period and therefore should not be 
 able to rejoin the cluster unless the administrator specifically requests it.
 The goal of this is to help avoid the potential issues with deleted data 
 coming back from the rejoining node's dataset.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6257) Safety check on node joining cluster (based on last seen vs GC grace period) to avoid zombie data

2013-10-29 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13808425#comment-13808425
 ] 

Johnny Miller commented on CASSANDRA-6257:
--

Is there anything the rejoining node itself can inspect locally so it can 
determine when it was last part of the cluster and use that instead?

 Safety check on node joining cluster (based on last seen vs GC grace period) 
 to avoid zombie data
 -

 Key: CASSANDRA-6257
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6257
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Minor

 When a node is rejoining a cluster, it would be nice to have some form of 
 safety check that the cluster recognises the last time the node was part of 
 the cluster is greater than the GC grace period and therefore should not be 
 able to rejoin the cluster unless the administrator specifically requests it.
 The goal of this is to help avoid the potential issues with deleted data 
 coming back from the rejoining node's dataset.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6257) Safety check on node joining cluster (based on last seen vs GC grace period) to avoid zombie data

2013-10-28 Thread Johnny Miller (JIRA)
Johnny Miller created CASSANDRA-6257:


 Summary: Safety check on node joining cluster (based on last seen 
vs GC grace period) to avoid zombie data
 Key: CASSANDRA-6257
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6257
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Minor


When a node is rejoining a cluster, it would be nice to have some form of 
safety check that the cluster recognises the last time the node was part of the 
cluster is greater than the GC grace period and therefore should not be able to 
rejoin the cluster unless the administrator specifically requests it.

The goal of this is to help avoid the potential issues with deleted data coming 
back from the rejoining node's dataset.




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6257) Safety check on node joining cluster (based on last seen vs GC grace period) to avoid zombie data

2013-10-28 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13807031#comment-13807031
 ] 

Johnny Miller commented on CASSANDRA-6257:
--

Just to provide some clarification on the scenario behind this.

The recommended action when returning a node to a cluster when it has been out 
for longer than the GC grace period is to remove the node, wipe its data and 
then rejoin. The purpose of this improvement is to ensure that if this isn't 
done, we at least warn the admin unless they explicitly want it.

 Safety check on node joining cluster (based on last seen vs GC grace period) 
 to avoid zombie data
 -

 Key: CASSANDRA-6257
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6257
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Assignee: Tyler Hobbs
Priority: Minor

 When a node is rejoining a cluster, it would be nice to have some form of 
 safety check that the cluster recognises the last time the node was part of 
 the cluster is greater than the GC grace period and therefore should not be 
 able to rejoin the cluster unless the administrator specifically requests it.
 The goal of this is to help avoid the potential issues with deleted data 
 coming back from the rejoining node's dataset.



--
This message was sent by Atlassian JIRA
(v6.1#6144)