[jira] [Commented] (CASSANDRA-4967) config options have different bounds when set via different methods

2015-09-24 Thread John Sumsion (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905876#comment-14905876
 ] 

John Sumsion commented on CASSANDRA-4967:
-

I am part-way through revamping the validation / defaults logic for config. See 
this branch on GitHub:
- https://github.com/jdsumsion/cassandra/tree/4967-config-validation

If I'm going in the wrong direction, please let me know soon, as I want to wrap 
this up by the end of the summit.

> config options have different bounds when set via different methods
> ---
>
> Key: CASSANDRA-4967
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4967
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.2.0 beta 2
>Reporter: Robert Coli
>Priority: Minor
>  Labels: lhf
>
> (similar to some of the work done in 
> https://issues.apache.org/jira/browse/CASSANDRA-4479
> )
> If one sets a value in cassandra.yaml, that value might be subject to bounds 
> checking there. However, if one sets that same value via JMX, it does not go 
> through a bounds-checking code path.
> "./src/java/org/apache/cassandra/config/DatabaseDescriptor.java" (JMX set)
> {noformat}
> public static void setPhiConvictThreshold(double phiConvictThreshold)
> {
> conf.phi_convict_threshold = phiConvictThreshold;
> }
> {noformat}
> Versus..
> ./src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
> (cassandra.yaml)
> {noformat}
> static void loadYaml()
> ...
>   /* phi convict threshold for FailureDetector */
> if (conf.phi_convict_threshold < 5 || conf.phi_convict_threshold 
> > 16)
> {
> throw new ConfigurationException("phi_convict_threshold must 
> be between 5 and 16");
> }
> {noformat}
> This seems to create a confusing situation where the range of potential 
> values for a given configuration option differs depending on how it is set.
> It's difficult to imagine a circumstance where you want bounds checking to 
> keep your node from starting if you set that value in cassandra.yaml, but 
> also want to allow circumvention of that bounds checking if you set it via JMX.
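
For illustration only, a minimal sketch (not the patch on the branch linked above) of applying the same 5..16 bounds inside the JMX setter; the use of IllegalArgumentException here is an assumption, the real fix may report the violation differently:

{code}
// Sketch only: the same bounds as loadYaml(), enforced in the JMX code path.
public static void setPhiConvictThreshold(double phiConvictThreshold)
{
    if (phiConvictThreshold < 5 || phiConvictThreshold > 16)
        throw new IllegalArgumentException("phi_convict_threshold must be between 5 and 16");
    conf.phi_convict_threshold = phiConvictThreshold;
}
{code}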



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10393) LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref)

2015-09-24 Thread Christian Winther (JIRA)
Christian Winther created CASSANDRA-10393:
-

 Summary: LEAK DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref)
 Key: CASSANDRA-10393
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10393
 Project: Cassandra
  Issue Type: Bug
 Environment: v 2.2.1 (from apt)

-> lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:Debian GNU/Linux 7.8 (wheezy)
Release:7.8
Codename:   wheezy

-> java -version
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)

Reporter: Christian Winther


When trying to run a full repair on a table with the following schema, my nodes 
stall and end up spamming the errors below.

I recently changed the table from SizeTieredCompactionStrategy to 
LeveledCompactionStrategy.

Coming from 2.1.9 -> 2.2.0 -> 2.2.1, I ran upgradesstables without issue as well.

When trying a full repair after the compaction change, I got "out of order" 
errors. A few Google searches later, I was told to "scrub" the keyspace - I did 
that during the night (no problems logged, and no data lost).

Now a repair just stalls and outputs memory leak errors all over the place.

{code}
CREATE KEYSPACE sessions WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': '3'}  AND durable_writes = true;

CREATE TABLE sessions.sessions (
id text PRIMARY KEY,
client_ip text,
controller text,
controller_action text,
created timestamp,
data text,
expires timestamp,
http_host text,
modified timestamp,
request_agent text,
request_agent_bot boolean,
request_path text,
site_id int,
user_id int
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"NONE", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
{code}


{code}
ERROR [Reference-Reaper:1] 2015-09-24 10:25:28,475 Ref.java:187 - LEAK 
DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref$State@4428a373) to class 
org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@184765:/data/1/cassandra/sessions/sessions-77dd22f0ab9711e49cbc410c6b6f53a6/la-104037-big
 was not released before the reference was garbage collected
ERROR [Reference-Reaper:1] 2015-09-24 10:25:28,475 Ref.java:187 - LEAK 
DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@368dd97) 
to class 
org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@184765:/data/1/cassandra/sessions/sessions-77dd22f0ab9711e49cbc410c6b6f53a6/la-104037-big
 was not released before the reference was garbage collected
ERROR [Reference-Reaper:1] 2015-09-24 10:25:28,475 Ref.java:187 - LEAK 
DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref$State@66fb78be) to class 
org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@184765:/data/1/cassandra/sessions/sessions-77dd22f0ab9711e49cbc410c6b6f53a6/la-104037-big
 was not released before the reference was garbage collected
ERROR [Reference-Reaper:1] 2015-09-24 10:25:28,475 Ref.java:187 - LEAK 
DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@9fdd2e6) 
to class 
org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@1460906269:/data/1/cassandra/sessions/sessions-77dd22f0ab9711e49cbc410c6b6f53a6/la-104788-big
 was not released before the reference was garbage collected
ERROR [Reference-Reaper:1] 2015-09-24 10:25:28,475 Ref.java:187 - LEAK 
DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@84fcb91) 
to class 
org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@1460906269:/data/1/cassandra/sessions/sessions-77dd22f0ab9711e49cbc410c6b6f53a6/la-104788-big
 was not released before the reference was garbage collected
{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10393) LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref)

2015-09-24 Thread Christian Winther (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Winther updated CASSANDRA-10393:
--
Description: 
When trying to run a full repair on a table with the following schema, my nodes 
stall and end up spamming the errors below.

I recently changed the table from SizeTieredCompactionStrategy to 
LeveledCompactionStrategy.

Coming from 2.1.9 -> 2.2.0 -> 2.2.1, I ran upgradesstables without issue as well.

When trying a full repair after the compaction change, I got "out of order" 
errors. A few Google searches later, I was told to "scrub" the keyspace - I did 
that during the night (no problems logged, and no data lost).

Now a repair just stalls and outputs memory leak errors all over the place.

{code}
CREATE KEYSPACE sessions WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': '3'}  AND durable_writes = true;

CREATE TABLE sessions.sessions (
id text PRIMARY KEY,
client_ip text,
controller text,
controller_action text,
created timestamp,
data text,
expires timestamp,
http_host text,
modified timestamp,
request_agent text,
request_agent_bot boolean,
request_path text,
site_id int,
user_id int
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"NONE", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
{code}

{code}
ERROR [Reference-Reaper:1] 2015-09-24 10:25:28,475 Ref.java:187 - LEAK 
DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref$State@4428a373) to class 
org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@184765:/data/1/cassandra/sessions/sessions-77dd22f0ab9711e49cbc410c6b6f53a6/la-104037-big
 was not released before the reference was garbage collected
ERROR [Reference-Reaper:1] 2015-09-24 10:25:28,475 Ref.java:187 - LEAK 
DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@368dd97) 
to class 
org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@184765:/data/1/cassandra/sessions/sessions-77dd22f0ab9711e49cbc410c6b6f53a6/la-104037-big
 was not released before the reference was garbage collected
ERROR [Reference-Reaper:1] 2015-09-24 10:25:28,475 Ref.java:187 - LEAK 
DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref$State@66fb78be) to class 
org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@184765:/data/1/cassandra/sessions/sessions-77dd22f0ab9711e49cbc410c6b6f53a6/la-104037-big
 was not released before the reference was garbage collected
ERROR [Reference-Reaper:1] 2015-09-24 10:25:28,475 Ref.java:187 - LEAK 
DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@9fdd2e6) 
to class 
org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@1460906269:/data/1/cassandra/sessions/sessions-77dd22f0ab9711e49cbc410c6b6f53a6/la-104788-big
 was not released before the reference was garbage collected
ERROR [Reference-Reaper:1] 2015-09-24 10:25:28,475 Ref.java:187 - LEAK 
DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@84fcb91) 
to class 
org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@1460906269:/data/1/cassandra/sessions/sessions-77dd22f0ab9711e49cbc410c6b6f53a6/la-104788-big
 was not released before the reference was garbage collected
{code}

  was:
When trying to repair full on a table with the following schema my nodes stall 
and end up with spamming this 

I've recently changed the table from SizeTieredCompactionStrategy to 
LeveledCompactionStrategy.

Coming from 2.1.9 -> 2.2.0 -> 2.2.1 i ran upgradesstable without issue as well

When trying to full repair post compaction change, I got "out of order" errors. 
A few google searches later, I was told to "scrub" the keyspace - did that 
during the night (no problems logged, and no data lost)

Now a repair just stalls and output memory leaks all over the place 

{code}
CREATE KEYSPACE sessions WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': '3'}  AND durable_writes = true;

CREATE TABLE sessions.sessions (
id text PRIMARY KEY,
client_ip text,
controller text,
controller_action text,
created timestamp,
data text,
expires timestamp,
http_host text,
modified timestamp,
request_agent text,
request_agent_bot boolean,
request_path text,
site_id int,
user_id int
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"NONE", 

[jira] [Commented] (CASSANDRA-8803) Implement transitional mode in C* that will accept both encrypted and non-encrypted client traffic

2015-09-24 Thread Norman Maurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906156#comment-14906156
 ] 

Norman Maurer commented on CASSANDRA-8803:
--

[~brandon.williams] I have a patch here that I would like to submit to allow 
serving SSL and non-SSL on the same port without the need for STARTTLS etc. This 
will make things a lot easier. Should I just reopen this issue and attach the 
patch here, or what?
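
For readers unfamiliar with the technique, here is a rough sketch of the port-unification idea in Netty terms: peek at the first bytes of a new connection and only install the SslHandler when they look like a TLS record. This is an illustration under those assumptions, not the patch referred to above; the class name is made up.

{code}
import java.util.List;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslHandler;

// Illustrative sketch: sniff the first bytes of a connection and install the
// SslHandler only when the client actually starts a TLS handshake; plain
// clients keep talking over the same port unencrypted.
public class OptionalSslDetector extends ByteToMessageDecoder
{
    private final SslContext sslContext;

    public OptionalSslDetector(SslContext sslContext)
    {
        this.sslContext = sslContext;
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out)
    {
        if (in.readableBytes() < 5)
            return; // not enough bytes yet to tell TLS from plain traffic

        if (SslHandler.isEncrypted(in))
            ctx.pipeline().replace(this, "ssl", sslContext.newHandler(ctx.alloc()));
        else
            ctx.pipeline().remove(this); // plain connection: drop the detector and move on
    }
}
{code}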

> Implement transitional mode in C* that will accept both encrypted and 
> non-encrypted client traffic
> --
>
> Key: CASSANDRA-8803
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8803
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Vishy Kasar
>
> We have some non-secure clusters taking live traffic in production from 
> active clients. We want to enable client-to-node encryption on these 
> clusters. Once we set client_encryption_options enabled to true in the yaml 
> and bounce a Cassandra node in the ring, the existing clients that do not do 
> SSL will fail to connect to that node.
> There does not seem to be a good way to roll this change out without taking 
> an outage. Can we implement a transitional mode in C* that will accept both 
> encrypted and non-encrypted client traffic? We would enable this during the 
> transition and turn it off after both server and client start talking SSL. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10382) nodetool info doesn't show the correct DC and RACK

2015-09-24 Thread Ruggero Marchei (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905951#comment-14905951
 ] 

Ruggero Marchei commented on CASSANDRA-10382:
-

I suppose that after [~carlyeks]'s comment, the cassandra.yaml and the snitch 
properties file are not needed anymore.

> nodetool info doesn't show the correct DC and RACK
> --
>
> Key: CASSANDRA-10382
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10382
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.2.1
> GossipingPropertyFileSnitch
>Reporter: Ruggero Marchei
>Assignee: Carl Yeksigian
>Priority: Minor
>
> When running *nodetool info* cassandra returns UNKNOWN_DC and UNKNOWN_RACK:
> {code}
> # nodetool info
> ID : b94f9ca0-f886-4111-a471-02f295573f37
> Gossip active  : true
> Thrift active  : true
> Native Transport active: true
> Load   : 44.97 MB
> Generation No  : 1442913138
> Uptime (seconds)   : 5386
> Heap Memory (MB)   : 429.07 / 3972.00
> Off Heap Memory (MB)   : 0.08
> Data Center: UNKNOWN_DC
> Rack   : UNKNOWN_RACK
> Exceptions : 1
> Key Cache  : entries 642, size 58.16 KB, capacity 100 MB, 5580 
> hits, 8320 requests, 0.671 recent hit rate, 14400 save period in seconds
> Row Cache  : entries 0, size 0 bytes, capacity 0 bytes, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> Counter Cache  : entries 0, size 0 bytes, capacity 50 MB, 0 hits, 0 
> requests, NaN recent hit rate, 7200 save period in seconds
> Token  : (invoke with -T/--tokens to see all 256 tokens)
> {code}
> Correct DCs and RACKs are returned by *nodetool status* and *nodetool 
> gossipinfo* commands:
> {code}
> # nodetool gossipinfo|grep -E 'RACK|DC'
>   DC:POZ
>   RACK:RACK30
>   DC:POZ
>   RACK:RACK30
>   DC:SJC
>   RACK:RACK68
>   DC:POZ
>   RACK:RACK30
>   DC:SJC
>   RACK:RACK62
>   DC:SJC
>   RACK:RACK62
> {code}
> {code}
> # nodetool status|grep Datacenter
> Datacenter: SJC
> Datacenter: POZ
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10389) Repair session exception Validation failed

2015-09-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906018#comment-14906018
 ] 

Jędrzej Sieracki commented on CASSANDRA-10389:
--

After checking the logs more thoroughly, the issue seems to be "Cannot start 
multiple repair sessions over the same sstables".

The interesting log portions from the repair session run on cblade1:

{quote}
INFO  [Repair#24:1] 2015-09-24 09:58:37,480 RepairJob.java:107 - [repair 
#0fc98340-6292-11e5-b992-9f13fa8664c8] requesting merkle trees for 
stock_increment_agg (to [/cblade10, cblade1])
INFO  [Repair#24:1] 2015-09-24 09:58:37,480 RepairJob.java:181 - [repair 
#0fc98340-6292-11e5-b992-9f13fa8664c8] Requesting merkle trees for 
stock_increment_agg (to [/cblade10, cblade1])
ERROR [ValidationExecutor:28] 2015-09-24 09:58:37,481 
CompactionManager.java:1070 - Cannot start multiple repair sessions over the 
same sstables
ERROR [ValidationExecutor:28] 2015-09-24 09:58:37,481 Validator.java:246 - 
Failed creating a merkle tree for [repair #0fc98340-6292-11e5-b992-9f13fa8664c8 
on perspectiv/stock_increment_agg, 
(-5927186132136652665,-5917344746039874798]], /cblade1(see log for details)
INFO  [AntiEntropyStage:1] 2015-09-24 09:58:37,481 RepairSession.java:181 - 
[repair #0fc98340-6292-11e5-b992-9f13fa8664c8] Received merkle tree for 
stock_increment_agg from /cblade1
ERROR [ValidationExecutor:28] 2015-09-24 09:58:37,481 CassandraDaemon.java:183 
- Exception in thread Thread[ValidationExecutor:28,1,main]
java.lang.RuntimeException: Cannot start multiple repair sessions over the same 
sstables
at 
org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1071)
 ~[apache-cassandra-2.2.1.jar:2.2.1]
at 
org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:94)
 ~[apache-cassandra-2.2.1.jar:2.2.1]
at 
org.apache.cassandra.db.compaction.CompactionManager$10.call(CompactionManager.java:669)
 ~[apache-cassandra-2.2.1.jar:2.2.1]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[na:1.8.0_60]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_60]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_60]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
WARN  [RepairJobTask:1] 2015-09-24 09:58:37,481 RepairJob.java:162 - [repair 
#0fc98340-6292-11e5-b992-9f13fa8664c8] stock_increment_agg sync failed
ERROR [RepairJobTask:2] 2015-09-24 09:58:37,482 CassandraDaemon.java:183 - 
Exception in thread Thread[RepairJobTask:2,5,RMI Runtime]
org.apache.cassandra.exceptions.RepairException: [repair 
#0fc98340-6292-11e5-b992-9f13fa8664c8 on perspectiv/stock_increment_agg, 
(-5927186132136652665,-5917344746039874798]] Validation failed in 
cblade1.dforcom.localdomain/cblade1
at 
org.apache.cassandra.repair.ValidationTask.treeReceived(ValidationTask.java:64) 
~[apache-cassandra-2.2.1.jar:2.2.1]
at 
org.apache.cassandra.repair.RepairSession.validationComplete(RepairSession.java:183)
 ~[apache-cassandra-2.2.1.jar:2.2.1]
at 
org.apache.cassandra.service.ActiveRepairService.handleMessage(ActiveRepairService.java:399)
 ~[apache-cassandra-2.2.1.jar:2.2.1]
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:158)
 ~[apache-cassandra-2.2.1.jar:2.2.1]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66) 
~[apache-cassandra-2.2.1.jar:2.2.1]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_60]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_60]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
INFO  [Repair#24:2] 2015-09-24 09:58:37,482 RepairJob.java:107 - [repair 
#0fc98340-6292-11e5-b992-9f13fa8664c8] requesting merkle trees for 
receipt_agg_total (to [/cblade10, cblade1.dforcom.localdomain/cblade1])
ERROR [Repair#24:1] 2015-09-24 09:58:37,482 CassandraDaemon.java:183 - 
Exception in thread Thread[Repair#24:1,5,RMI Runtime]
com.google.common.util.concurrent.UncheckedExecutionException: 
org.apache.cassandra.exceptions.RepairException: [repair 
#0fc98340-6292-11e5-b992-9f13fa8664c8 on perspectiv/stock_increment_agg, 
(-5927186132136652665,-5917344746039874798]] Validation failed in 
cblade1.dforcom.localdomain/cblade1
at 
com.google.common.util.concurrent.Futures.wrapAndThrowUnchecked(Futures.java:1387)
 ~[guava-16.0.jar:na]
at 
com.google.common.util.concurrent.Futures.getUnchecked(Futures.java:1373) 
~[guava-16.0.jar:na]
at org.apache.cassandra.repair.RepairJob.run(RepairJob.java:169) 
~[apache-cassandra-2.2.1.jar:2.2.1]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 

[jira] [Commented] (CASSANDRA-9748) Can't see other nodes when using multiple network interfaces

2015-09-24 Thread Roman Bielik (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905963#comment-14905963
 ] 

Roman Bielik commented on CASSANDRA-9748:
-

Hi, I tried the port forwarding and telnet to the public IP on port 7000 works 
OK, but I still get the "Unable to gossip..." error on the second node.
Anyway, the analysis seems perfectly reasonable, thank you. That would explain 
why in all tutorials the private/public IP configuration was always used in 
combination with EC2MultiRegionSnitch.

Thank you, it would be great if listen_address could be set to all interfaces 
(0.0.0.0), as that would probably solve this issue.


> Can't see other nodes when using multiple network interfaces
> 
>
> Key: CASSANDRA-9748
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9748
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.0.16; multi-DC configuration
>Reporter: Roman Bielik
>Assignee: Paulo Motta
> Attachments: system_node1.log, system_node2.log
>
>
> The idea is to set up a multi-DC environment across 2 different networks based 
> on the following configuration recommendations:
> http://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configMultiNetworks.html
> Each node has 2 network interfaces. One is used as a private network (DC1: 
> 10.0.1.x and DC2: 10.0.2.x). The second one is a "public" network where all 
> nodes can see each other (this one has a higher latency). 
> Using the following settings in cassandra.yaml:
> *seeds:* public IP (same as used in broadcast_address)
> *listen_address:* private IP
> *broadcast_address:* public IP
> *rpc_address:* 0.0.0.0
> *endpoint_snitch:* GossipingPropertyFileSnitch
> _(tried different combinations with no luck)_
> No firewall and no SSL/encryption used.
> The problem is that nodes do not see each other (a gossip problem I guess). 
> The nodetool ring/status shows only the local node but not the other ones 
> (even from the same DC).
> When I set listen_address to public IP, then everything works fine, but that 
> is not the required configuration.
> _Note: Not using EC2 cloud!_
> netstat -anp | grep -E "(7199|9160|9042|7000)"
> tcp   0   0 0.0.0.0:7199     0.0.0.0:*        LISTEN       3587/java
> tcp   0   0 10.0.1.1:9160    0.0.0.0:*        LISTEN       3587/java
> tcp   0   0 10.0.1.1:9042    0.0.0.0:*        LISTEN       3587/java
> tcp   0   0 10.0.1.1:7000    0.0.0.0:*        LISTEN       3587/java
> tcp   0   0 127.0.0.1:7199   127.0.0.1:52874  ESTABLISHED  3587/java
> tcp   0   0 10.0.1.1:7199    10.0.1.1:39650   ESTABLISHED  3587/java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8803) Implement transitional mode in C* that will accept both encrypted and non-encrypted client traffic

2015-09-24 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906349#comment-14906349
 ] 

Brandon Williams commented on CASSANDRA-8803:
-

Let's make a new one and we can link it here.

> Implement transitional mode in C* that will accept both encrypted and 
> non-encrypted client traffic
> --
>
> Key: CASSANDRA-8803
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8803
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Vishy Kasar
>
> We have some non-secure clusters taking live traffic in production from 
> active clients. We want to enable client-to-node encryption on these 
> clusters. Once we set client_encryption_options enabled to true in the yaml 
> and bounce a Cassandra node in the ring, the existing clients that do not do 
> SSL will fail to connect to that node.
> There does not seem to be a good way to roll this change out without taking 
> an outage. Can we implement a transitional mode in C* that will accept both 
> encrypted and non-encrypted client traffic? We would enable this during the 
> transition and turn it off after both server and client start talking SSL. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10394) Mixed case usernames do not work

2015-09-24 Thread William Streaker (JIRA)
William Streaker created CASSANDRA-10394:


 Summary: Mixed case usernames do not work
 Key: CASSANDRA-10394
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10394
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Centos 7, Cassandra 2.2.1
Reporter: William Streaker
Priority: Critical
 Fix For: 2.1.9


When you create a user with a mixed case username it is stored as all lower 
case.  When you try and login with the mixed case username it will fail, but 
logging in as the lower case name works.   This is a change from the 2.1.x 
versions that are released where mixed case usernames worked.

example:
CREATE USER stBarts WITH PASSWORD 'island';   
The above statement changes the username to "stbarts".

This would not be so bad except during login case does matter and has to match 
what is stored in the system.   
Recommended fix: allow mixed case usernames to be stored in system, or convert 
mixed case username entered to lower case during login.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-10394) Mixed case usernames do not work

2015-09-24 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe reassigned CASSANDRA-10394:
---

Assignee: Sam Tunnicliffe

> Mixed case usernames do not work
> 
>
> Key: CASSANDRA-10394
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10394
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Centos 7, Cassandra 2.2.1
>Reporter: William Streaker
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 2.1.x
>
>
> When you create a user with a mixed case username it is stored as all lower 
> case.  When you try and login with the mixed case username it will fail, but 
> logging in as the lower case name works.   This is a change from the 2.1.x 
> versions that are released where mixed case usernames worked.
> example:
> CREATE USER stBarts WITH PASSWORD 'island';   
> The above statement changes the username to "stbarts".
> This would not be so bad except during login case does matter and has to 
> match what is stored in the system.   
> Recommended fix: allow mixed case usernames to be stored in system, or 
> convert mixed case username entered to lower case during login.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10394) Mixed case usernames do not work

2015-09-24 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906452#comment-14906452
 ] 

Sam Tunnicliffe commented on CASSANDRA-10394:
-

This is a regression introduced by CASSANDRA-7653. I'll post a patch shortly, but 
it should be noted that {{CREATE USER}} is deprecated in favour of {{CREATE 
ROLE}} in 2.2.

> Mixed case usernames do not work
> 
>
> Key: CASSANDRA-10394
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10394
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Centos 7, Cassandra 2.2.1
>Reporter: William Streaker
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 2.1.x
>
>
> When you create a user with a mixed case username it is stored as all lower 
> case.  When you try and login with the mixed case username it will fail, but 
> logging in as the lower case name works.   This is a change from the 2.1.x 
> versions that are released where mixed case usernames worked.
> example:
> CREATE USER stBarts WITH PASSWORD 'island';   
> The above statement changes the username to "stbarts".
> This would not be so bad except during login case does matter and has to 
> match what is stored in the system.   
> Recommended fix: allow mixed case usernames to be stored in system, or 
> convert mixed case username entered to lower case during login.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10394) Mixed case usernames do not work

2015-09-24 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-10394:

Fix Version/s: (was: 2.1.x)
   2.2.x

> Mixed case usernames do not work
> 
>
> Key: CASSANDRA-10394
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10394
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Centos 7, Cassandra 2.2.1
>Reporter: William Streaker
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 2.2.x
>
>
> When you create a user with a mixed case username it is stored as all lower 
> case.  When you try and login with the mixed case username it will fail, but 
> logging in as the lower case name works.   This is a change from the 2.1.x 
> versions that are released where mixed case usernames worked.
> example:
> CREATE USER stBarts WITH PASSWORD 'island';   
> The above statement changes the username to "stbarts".
> This would not be so bad except during login case does matter and has to 
> match what is stored in the system.   
> Recommended fix: allow mixed case usernames to be stored in system, or 
> convert mixed case username entered to lower case during login.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10394) Mixed case usernames do not work

2015-09-24 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-10394:

Reproduced In: 2.2.1, 2.2.0  (was: 2.1.9)

> Mixed case usernames do not work
> 
>
> Key: CASSANDRA-10394
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10394
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Centos 7, Cassandra 2.2.1
>Reporter: William Streaker
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 2.2.x
>
>
> When you create a user with a mixed case username it is stored as all lower 
> case.  When you try and login with the mixed case username it will fail, but 
> logging in as the lower case name works.   This is a change from the 2.1.x 
> versions that are released where mixed case usernames worked.
> example:
> CREATE USER stBarts WITH PASSWORD 'island';   
> The above statement changes the username to "stbarts".
> This would not be so bad except during login case does matter and has to 
> match what is stored in the system.   
> Recommended fix: allow mixed case usernames to be stored in system, or 
> convert mixed case username entered to lower case during login.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10394) Mixed case usernames do not work

2015-09-24 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906432#comment-14906432
 ] 

Philip Thompson commented on CASSANDRA-10394:
-

You're saying this is a new regression in 2.1.9?

If you do {{CREATE USER "stBarts" WITH PASSWORD 'island';}}, is the mixed case 
preserved?

> Mixed case usernames do not work
> 
>
> Key: CASSANDRA-10394
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10394
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Centos 7, Cassandra 2.2.1
>Reporter: William Streaker
>Priority: Critical
> Fix For: 2.1.x
>
>
> When you create a user with a mixed case username it is stored as all lower 
> case.  When you try and login with the mixed case username it will fail, but 
> logging in as the lower case name works.   This is a change from the 2.1.x 
> versions that are released where mixed case usernames worked.
> example:
> CREATE USER stBarts WITH PASSWORD 'island';   
> The above statement changes the username to "stbarts".
> This would not be so bad except during login case does matter and has to 
> match what is stored in the system.   
> Recommended fix: allow mixed case usernames to be stored in system, or 
> convert mixed case username entered to lower case during login.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10394) Mixed case usernames do not work

2015-09-24 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-10394:

Reproduced In: 2.1.9
Fix Version/s: (was: 2.1.9)
   2.1.x

> Mixed case usernames do not work
> 
>
> Key: CASSANDRA-10394
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10394
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Centos 7, Cassandra 2.2.1
>Reporter: William Streaker
>Priority: Critical
> Fix For: 2.1.x
>
>
> When you create a user with a mixed case username it is stored as all lower 
> case.  When you try and login with the mixed case username it will fail, but 
> logging in as the lower case name works.   This is a change from the 2.1.x 
> versions that are released where mixed case usernames worked.
> example:
> CREATE USER stBarts WITH PASSWORD 'island';   
> The above statement changes the username to "stbarts".
> This would not be so bad except during login case does matter and has to 
> match what is stored in the system.   
> Recommended fix: allow mixed case usernames to be stored in system, or 
> convert mixed case username entered to lower case during login.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10074) cqlsh HELP SELECT_EXPR gives outdated incorrect information

2015-09-24 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906606#comment-14906606
 ] 

Stefania commented on CASSANDRA-10074:
--

+1

> cqlsh HELP SELECT_EXPR gives outdated incorrect information
> ---
>
> Key: CASSANDRA-10074
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10074
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: 3.0.0-alpha1-SNAPSHOT
>Reporter: Jim Meyer
>Assignee: Philip Thompson
>Priority: Trivial
>  Labels: cqlsh, lhf
> Fix For: 3.x
>
> Attachments: 10074.txt
>
>
> Within cqlsh, the HELP SELECT_EXPR states that COUNT is the only function 
> supported by CQL.
> It is missing a description of the SUM, AVG, MIN, and MAX built in functions.
> It should probably also mention that user defined functions can be invoked 
> via SELECT.
> The outdated text is in pylib/cqlshlib/helptopics.py under def 
> help_select_expr



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-09-24 Thread jmckenzie
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/52a10696
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/52a10696
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/52a10696

Branch: refs/heads/trunk
Commit: 52a10696aeb88bf055ff1800ab6a557598bac7a5
Parents: a2b1b8a 2e87c43
Author: Joshua McKenzie 
Authored: Thu Sep 24 09:44:04 2015 -0700
Committer: Joshua McKenzie 
Committed: Thu Sep 24 09:44:04 2015 -0700

--
 pylib/cqlshlib/helptopics.py | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)
--




cassandra git commit: Fix HELP SELECT_EXPR output in cqlsh

2015-09-24 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 aa60cde31 -> 2e87c433a


Fix HELP SELECT_EXPR output in cqlsh

Patch by pthompson; reviewed by stefania for CASSANDRA-10074


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2e87c433
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2e87c433
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2e87c433

Branch: refs/heads/cassandra-3.0
Commit: 2e87c433a957bebe8e9578bca81ee7db6b50c87b
Parents: aa60cde
Author: Philip Thompson 
Authored: Thu Sep 24 09:43:29 2015 -0700
Committer: Joshua McKenzie 
Committed: Thu Sep 24 09:43:29 2015 -0700

--
 pylib/cqlshlib/helptopics.py | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e87c433/pylib/cqlshlib/helptopics.py
--
diff --git a/pylib/cqlshlib/helptopics.py b/pylib/cqlshlib/helptopics.py
index 8e296e1..b5cf09a 100644
--- a/pylib/cqlshlib/helptopics.py
+++ b/pylib/cqlshlib/helptopics.py
@@ -797,6 +797,7 @@ class CQL3HelpTopics(CQLHelpTopics):
 
   SELECT name1, name2, name3 FROM ...
   SELECT COUNT(*) FROM ...
+  SELECT MIN(name1), MAX(name2), SUM(name3), AVG(name4) FROM ...
 
 The SELECT expression determines which columns will appear in the
 results and takes the form of a comma separated list of names.
@@ -810,7 +811,14 @@ class CQL3HelpTopics(CQLHelpTopics):
 single row will be returned, with a single column named "count" whose
 value is the number of rows from the pre-aggregation resultset.
 
-Currently, COUNT is the only function supported by CQL.
+The MAX and MIN functions can be used to compute the maximum and the
+minimum value returned by a query for a given column.
+
+The SUM function can be used to sum up all the values returned by
+a query for a given column.
+
+The AVG function can be used to compute the average of all the
+values returned by a query for a given column.
 """
 
 def help_alter_drop(self):



[1/2] cassandra git commit: Fix HELP SELECT_EXPR output in cqlsh

2015-09-24 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk a2b1b8abc -> 52a10696a


Fix HELP SELECT_EXPR output in cqlsh

Patch by pthompson; reviewed by stefania for CASSANDRA-10074


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2e87c433
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2e87c433
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2e87c433

Branch: refs/heads/trunk
Commit: 2e87c433a957bebe8e9578bca81ee7db6b50c87b
Parents: aa60cde
Author: Philip Thompson 
Authored: Thu Sep 24 09:43:29 2015 -0700
Committer: Joshua McKenzie 
Committed: Thu Sep 24 09:43:29 2015 -0700

--
 pylib/cqlshlib/helptopics.py | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e87c433/pylib/cqlshlib/helptopics.py
--
diff --git a/pylib/cqlshlib/helptopics.py b/pylib/cqlshlib/helptopics.py
index 8e296e1..b5cf09a 100644
--- a/pylib/cqlshlib/helptopics.py
+++ b/pylib/cqlshlib/helptopics.py
@@ -797,6 +797,7 @@ class CQL3HelpTopics(CQLHelpTopics):
 
   SELECT name1, name2, name3 FROM ...
   SELECT COUNT(*) FROM ...
+  SELECT MIN(name1), MAX(name2), SUM(name3), AVG(name4) FROM ...
 
 The SELECT expression determines which columns will appear in the
 results and takes the form of a comma separated list of names.
@@ -810,7 +811,14 @@ class CQL3HelpTopics(CQLHelpTopics):
 single row will be returned, with a single column named "count" whose
 value is the number of rows from the pre-aggregation resultset.
 
-Currently, COUNT is the only function supported by CQL.
+The MAX and MIN functions can be used to compute the maximum and the
+minimum value returned by a query for a given column.
+
+The SUM function can be used to sum up all the values returned by
+a query for a given column.
+
+The AVG function can be used to compute the average of all the
+values returned by a query for a given column.
 """
 
 def help_alter_drop(self):



[jira] [Assigned] (CASSANDRA-10390) inconsistent quoted identifier handling in UDTs

2015-09-24 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe reassigned CASSANDRA-10390:
---

Assignee: Sam Tunnicliffe  (was: Benjamin Lerer)

> inconsistent quoted identifier handling in UDTs
> ---
>
> Key: CASSANDRA-10390
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10390
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.2.1
>Reporter: Jonathan Halliday
>Assignee: Sam Tunnicliffe
> Fix For: 2.2.x
>
>
> > create keyspace test with replication = {'class': 'SimpleStrategy', 
> > 'replication_factor': 1 } ;
> > create type if not exists mytype ("my.field" text);
> > desc keyspace; -- observe that mytype is listed
> > create table mytable (pk int primary key, myfield frozen<mytype>);
> > desc keyspace; -- observe that mytype is listed, but mytable is not.
> > select * from mytable;
> ValueError: Type names and field names can only contain alphanumeric 
> characters and underscores: 'my.field'
> create table myothertable (pk int primary key, "my.field" text);
> select * from myothertable; -- valid
> huh? It's valid to create a field of a table, or a field of a type, with a 
> quoted name containing non-alpha chars, but it's not valid to use such a 
> type in a table?  I can just about live with that, though it seems 
> unnecessarily restrictive, but allowing creation of such a table and then 
> making it invisible/unusable definitely seems wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10382) nodetool info doesn't show the correct DC and RACK

2015-09-24 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-10382:
-
Reviewer: Stefania

> nodetool info doesn't show the correct DC and RACK
> --
>
> Key: CASSANDRA-10382
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10382
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.2.1
> GossipingPropertyFileSnitch
>Reporter: Ruggero Marchei
>Assignee: Carl Yeksigian
>Priority: Minor
>
> When running *nodetool info* cassandra returns UNKNOWN_DC and UNKNOWN_RACK:
> {code}
> # nodetool info
> ID : b94f9ca0-f886-4111-a471-02f295573f37
> Gossip active  : true
> Thrift active  : true
> Native Transport active: true
> Load   : 44.97 MB
> Generation No  : 1442913138
> Uptime (seconds)   : 5386
> Heap Memory (MB)   : 429.07 / 3972.00
> Off Heap Memory (MB)   : 0.08
> Data Center: UNKNOWN_DC
> Rack   : UNKNOWN_RACK
> Exceptions : 1
> Key Cache  : entries 642, size 58.16 KB, capacity 100 MB, 5580 
> hits, 8320 requests, 0.671 recent hit rate, 14400 save period in seconds
> Row Cache  : entries 0, size 0 bytes, capacity 0 bytes, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> Counter Cache  : entries 0, size 0 bytes, capacity 50 MB, 0 hits, 0 
> requests, NaN recent hit rate, 7200 save period in seconds
> Token  : (invoke with -T/--tokens to see all 256 tokens)
> {code}
> Correct DCs and RACKs are returned by *nodetool status* and *nodetool 
> gossipinfo* commands:
> {code}
> # nodetool gossipinfo|grep -E 'RACK|DC'
>   DC:POZ
>   RACK:RACK30
>   DC:POZ
>   RACK:RACK30
>   DC:SJC
>   RACK:RACK68
>   DC:POZ
>   RACK:RACK30
>   DC:SJC
>   RACK:RACK62
>   DC:SJC
>   RACK:RACK62
> {code}
> {code}
> # nodetool status|grep Datacenter
> Datacenter: SJC
> Datacenter: POZ
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10382) nodetool info doesn't show the correct DC and RACK

2015-09-24 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906591#comment-14906591
 ] 

Stefania commented on CASSANDRA-10382:
--

Patch is +1.

Added a dtest to reproduce the issue 
[here|https://github.com/riptano/cassandra-dtest/pull/569].

Confirmed that patch fixes problem in 2.2+ but problem also exists in 2.1.


> nodetool info doesn't show the correct DC and RACK
> --
>
> Key: CASSANDRA-10382
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10382
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.2.1
> GossipingPropertyFileSnitch
>Reporter: Ruggero Marchei
>Assignee: Carl Yeksigian
>Priority: Minor
>
> When running *nodetool info* cassandra returns UNKNOWN_DC and UNKNOWN_RACK:
> {code}
> # nodetool info
> ID : b94f9ca0-f886-4111-a471-02f295573f37
> Gossip active  : true
> Thrift active  : true
> Native Transport active: true
> Load   : 44.97 MB
> Generation No  : 1442913138
> Uptime (seconds)   : 5386
> Heap Memory (MB)   : 429.07 / 3972.00
> Off Heap Memory (MB)   : 0.08
> Data Center: UNKNOWN_DC
> Rack   : UNKNOWN_RACK
> Exceptions : 1
> Key Cache  : entries 642, size 58.16 KB, capacity 100 MB, 5580 
> hits, 8320 requests, 0.671 recent hit rate, 14400 save period in seconds
> Row Cache  : entries 0, size 0 bytes, capacity 0 bytes, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> Counter Cache  : entries 0, size 0 bytes, capacity 50 MB, 0 hits, 0 
> requests, NaN recent hit rate, 7200 save period in seconds
> Token  : (invoke with -T/--tokens to see all 256 tokens)
> {code}
> Correct DCs and RACKs are returned by *nodetool status* and *nodetool 
> gossipinfo* commands:
> {code}
> # nodetool gossipinfo|grep -E 'RACK|DC'
>   DC:POZ
>   RACK:RACK30
>   DC:POZ
>   RACK:RACK30
>   DC:SJC
>   RACK:RACK68
>   DC:POZ
>   RACK:RACK30
>   DC:SJC
>   RACK:RACK62
>   DC:SJC
>   RACK:RACK62
> {code}
> {code}
> # nodetool status|grep Datacenter
> Datacenter: SJC
> Datacenter: POZ
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10382) nodetool info doesn't show the correct DC and RACK

2015-09-24 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906591#comment-14906591
 ] 

Stefania edited comment on CASSANDRA-10382 at 9/24/15 4:26 PM:
---

Config files no longer required, thanks. I've added a dtest that reproduces the 
issue [here|https://github.com/riptano/cassandra-dtest/pull/569].

Patch is +1. 

I've also confirmed that the patch fixes the problem in 2.2+ but the problem 
also exists in 2.1.



was (Author: stefania):
Config files no longer required, thanks. I've added a dtest that reproduces the 
issue [here|https://github.com/riptano/cassandra-dtest/pull/569].

Patch is +1. I've also confirmed that the patch fixes the problem in 2.2+ but 
the problem also exists in 2.1.


> nodetool info doesn't show the correct DC and RACK
> --
>
> Key: CASSANDRA-10382
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10382
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.2.1
> GossipingPropertyFileSnitch
>Reporter: Ruggero Marchei
>Assignee: Carl Yeksigian
>Priority: Minor
>
> When running *nodetool info* cassandra returns UNKNOWN_DC and UNKNOWN_RACK:
> {code}
> # nodetool info
> ID : b94f9ca0-f886-4111-a471-02f295573f37
> Gossip active  : true
> Thrift active  : true
> Native Transport active: true
> Load   : 44.97 MB
> Generation No  : 1442913138
> Uptime (seconds)   : 5386
> Heap Memory (MB)   : 429.07 / 3972.00
> Off Heap Memory (MB)   : 0.08
> Data Center: UNKNOWN_DC
> Rack   : UNKNOWN_RACK
> Exceptions : 1
> Key Cache  : entries 642, size 58.16 KB, capacity 100 MB, 5580 
> hits, 8320 requests, 0.671 recent hit rate, 14400 save period in seconds
> Row Cache  : entries 0, size 0 bytes, capacity 0 bytes, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> Counter Cache  : entries 0, size 0 bytes, capacity 50 MB, 0 hits, 0 
> requests, NaN recent hit rate, 7200 save period in seconds
> Token  : (invoke with -T/--tokens to see all 256 tokens)
> {code}
> Correct DCs and RACKs are returned by *nodetool status* and *nodetool 
> gossipinfo* commands:
> {code}
> # nodetool gossipinfo|grep -E 'RACK|DC'
>   DC:POZ
>   RACK:RACK30
>   DC:POZ
>   RACK:RACK30
>   DC:SJC
>   RACK:RACK68
>   DC:POZ
>   RACK:RACK30
>   DC:SJC
>   RACK:RACK62
>   DC:SJC
>   RACK:RACK62
> {code}
> {code}
> # nodetool status|grep Datacenter
> Datacenter: SJC
> Datacenter: POZ
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10382) nodetool info doesn't show the correct DC and RACK

2015-09-24 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906591#comment-14906591
 ] 

Stefania edited comment on CASSANDRA-10382 at 9/24/15 4:26 PM:
---

Config files no longer required, thanks. I've added a dtest that reproduces the 
issue [here|https://github.com/riptano/cassandra-dtest/pull/569].

Patch is +1. I've also confirmed that the patch fixes the problem in 2.2+ but 
the problem also exists in 2.1.



was (Author: stefania):
Patch is +1.

Added a dtest to reproduce the issue 
[here|https://github.com/riptano/cassandra-dtest/pull/569].

Confirmed that patch fixes problem in 2.2+ but problem also exists in 2.1.


> nodetool info doesn't show the correct DC and RACK
> --
>
> Key: CASSANDRA-10382
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10382
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.2.1
> GossipingPropertyFileSnitch
>Reporter: Ruggero Marchei
>Assignee: Carl Yeksigian
>Priority: Minor
>
> When running *nodetool info* cassandra returns UNKNOWN_DC and UNKNOWN_RACK:
> {code}
> # nodetool info
> ID : b94f9ca0-f886-4111-a471-02f295573f37
> Gossip active  : true
> Thrift active  : true
> Native Transport active: true
> Load   : 44.97 MB
> Generation No  : 1442913138
> Uptime (seconds)   : 5386
> Heap Memory (MB)   : 429.07 / 3972.00
> Off Heap Memory (MB)   : 0.08
> Data Center: UNKNOWN_DC
> Rack   : UNKNOWN_RACK
> Exceptions : 1
> Key Cache  : entries 642, size 58.16 KB, capacity 100 MB, 5580 
> hits, 8320 requests, 0.671 recent hit rate, 14400 save period in seconds
> Row Cache  : entries 0, size 0 bytes, capacity 0 bytes, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> Counter Cache  : entries 0, size 0 bytes, capacity 50 MB, 0 hits, 0 
> requests, NaN recent hit rate, 7200 save period in seconds
> Token  : (invoke with -T/--tokens to see all 256 tokens)
> {code}
> Correct DCs and RACKs are returned by *nodetool status* and *nodetool 
> gossipinfo* commands:
> {code}
> # nodetool gossipinfo|grep -E 'RACK|DC'
>   DC:POZ
>   RACK:RACK30
>   DC:POZ
>   RACK:RACK30
>   DC:SJC
>   RACK:RACK68
>   DC:POZ
>   RACK:RACK30
>   DC:SJC
>   RACK:RACK62
>   DC:SJC
>   RACK:RACK62
> {code}
> {code}
> # nodetool status|grep Datacenter
> Datacenter: SJC
> Datacenter: POZ
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10074) cqlsh HELP SELECT_EXPR gives outdated incorrect information

2015-09-24 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10074:

Fix Version/s: (was: 3.x)
   3.0.0 rc2

> cqlsh HELP SELECT_EXPR gives outdated incorrect information
> ---
>
> Key: CASSANDRA-10074
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10074
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: 3.0.0-alpha1-SNAPSHOT
>Reporter: Jim Meyer
>Assignee: Philip Thompson
>Priority: Trivial
>  Labels: cqlsh, lhf
> Fix For: 3.0.0 rc2
>
> Attachments: 10074.txt
>
>
> Within cqlsh, the HELP SELECT_EXPR states that COUNT is the only function 
> supported by CQL.
> It is missing a description of the SUM, AVG, MIN, and MAX built in functions.
> It should probably also mention that user defined functions can be invoked 
> via SELECT.
> The outdated text is in pylib/cqlshlib/helptopics.py under def 
> help_select_expr



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10371) Decommissioned nodes can remain in gossip

2015-09-24 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-10371:
---
Assignee: Stefania

> Decommissioned nodes can remain in gossip
> -
>
> Key: CASSANDRA-10371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10371
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Stefania
>Priority: Minor
>
> This may apply to other dead states as well.  Dead states should be expired 
> after 3 days.  In the case of decom we attach a timestamp to let the other 
> nodes know when the state should be expired.  It has been observed that 
> sometimes a subset of nodes in the cluster never expire the state, and heap 
> analysis of these nodes reveals that the epstate.isAlive check returns true 
> when it should return false (returning false is what would allow the state 
> to be evicted).  This may have been affected by CASSANDRA-8336.
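
For context, a hedged sketch of the eviction condition described above, written as a standalone helper rather than Gossiper's actual status-check code (all names here are illustrative): if isAlive() wrongly stays true, the condition never holds and the decommissioned node's state lingers.

{code}
import java.net.InetAddress;
import java.util.Map;

// Illustrative helper only; Gossiper's real status check differs in detail.
final class DeadStateExpirySketch
{
    interface EndpointState { boolean isAlive(); }

    // expireTimes holds the timestamp attached when a node is decommissioned
    static boolean shouldEvict(InetAddress endpoint, EndpointState state,
                               Map<InetAddress, Long> expireTimes, long nowMillis)
    {
        Long expireTime = expireTimes.get(endpoint);
        // eviction requires the endpoint to read as down AND the expiry time to have passed
        return expireTime != null && !state.isAlive() && nowMillis > expireTime;
    }
}
{code}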



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10347) Bulk Loader API could not tolerate even node failure

2015-09-24 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906846#comment-14906846
 ] 

Brandon Williams commented on CASSANDRA-10347:
--

We have this in the standalone loader, so I don't see why we shouldn't have it 
here.

> Bulk Loader API could not tolerate even node failure
> 
>
> Key: CASSANDRA-10347
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10347
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Shenghua Wan
>Assignee: Paulo Motta
>
> When a user uses CqlBulkOutputFormat, it tries to stream to all the nodes in 
> the token range, which includes dead nodes, so the stream fails. There is a 
> design in the C* API to allow the stream() method to take a list of hosts to 
> ignore, but it is not utilized.
> The empty-argument stream() method is called in all existing versions of C*, 
> i.e.
> in v2.0.11, 
> https://github.com/apache/cassandra/blob/cassandra-2.0.11/src/java/org/apache/cassandra/hadoop/AbstractBulkRecordWriter.java#L122
> in v2.1.5, 
> https://github.com/apache/cassandra/blob/cassandra-2.1.5/src/java/org/apache/cassandra/hadoop/AbstractBulkRecordWriter.java#L122
> and current trunk branch 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/hadoop/cql3/CqlBulkRecordWriter.java#L241
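
To make the quoted point concrete, here is a hedged sketch of the alternative call shape: pass the dead hosts to the overload that takes endpoints to ignore, as the standalone loader does. How the record writer would learn which hosts are dead (a job config option, a gossip check, ...) is left open and not part of this sketch; the helper name is made up.

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.HashSet;
import java.util.Set;

import org.apache.cassandra.io.sstable.SSTableLoader;
import org.apache.cassandra.streaming.StreamResultFuture;

// Sketch only: build the set of endpoints to skip and hand it to stream(),
// instead of calling the empty-argument stream() referenced above.
final class BulkLoadIgnoreHostsSketch
{
    static StreamResultFuture streamIgnoringDeadHosts(SSTableLoader loader, String... deadHosts)
            throws UnknownHostException
    {
        Set<InetAddress> toIgnore = new HashSet<>();
        for (String host : deadHosts)
            toIgnore.add(InetAddress.getByName(host)); // do not attempt to stream to these nodes

        return loader.stream(toIgnore);
    }
}
{code}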



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10323) Add more MaterializedView metrics

2015-09-24 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906749#comment-14906749
 ] 

Philip Thompson commented on CASSANDRA-10323:
-

[~rcoli] raised the good point that a very helpful metric for operators would 
be to track MV lag, because mutations are applied to the view from the base 
table asynchronously.

> Add more MaterializedView metrics
> -
>
> Key: CASSANDRA-10323
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10323
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>  Labels: lhf
> Fix For: 3.0.0 rc2
>
>
> We need to add more metrics to help understand where time is spent in 
> materialized view writes. We currently track the ratio of async base -> view 
> mutations that fail.
> We should also add
>   * The amount of time spent waiting for the partition lock (contention)
>   * The amount of time spent reading data 
> Any others? 
> [~carlyeks] [~jkni] 
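
As a rough illustration of the shape of the two timers proposed above, a sketch using the Dropwizard Metrics classes Cassandra already depends on; the metric names and the plain MetricRegistry wiring are illustrative only, not the project's actual metric plumbing.

{code}
import java.util.concurrent.TimeUnit;

import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

// Illustrative sketch of the proposed materialized-view write metrics.
public class ViewWriteMetricsSketch
{
    private final MetricRegistry registry = new MetricRegistry();

    // time spent waiting on the base-partition lock (contention)
    public final Timer viewLockAcquireTime = registry.timer("ViewLockAcquireTime");
    // time spent reading existing data to compute the view update
    public final Timer viewReadTime = registry.timer("ViewReadTime");

    public void onViewWrite(long lockWaitNanos, long readNanos)
    {
        viewLockAcquireTime.update(lockWaitNanos, TimeUnit.NANOSECONDS);
        viewReadTime.update(readNanos, TimeUnit.NANOSECONDS);
    }
}
{code}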



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10382) nodetool info doesn't show the correct DC and RACK

2015-09-24 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906689#comment-14906689
 ] 

Carl Yeksigian commented on CASSANDRA-10382:


Pushed a [2.1 
branch|https://github.com/carlyeks/cassandra/tree/ticket/10382/2.1].

> nodetool info doesn't show the correct DC and RACK
> --
>
> Key: CASSANDRA-10382
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10382
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.2.1
> GossipingPropertyFileSnitch
>Reporter: Ruggero Marchei
>Assignee: Carl Yeksigian
>Priority: Minor
>
> When running *nodetool info* cassandra returns UNKNOWN_DC and UNKNOWN_RACK:
> {code}
> # nodetool info
> ID : b94f9ca0-f886-4111-a471-02f295573f37
> Gossip active  : true
> Thrift active  : true
> Native Transport active: true
> Load   : 44.97 MB
> Generation No  : 1442913138
> Uptime (seconds)   : 5386
> Heap Memory (MB)   : 429.07 / 3972.00
> Off Heap Memory (MB)   : 0.08
> Data Center: UNKNOWN_DC
> Rack   : UNKNOWN_RACK
> Exceptions : 1
> Key Cache  : entries 642, size 58.16 KB, capacity 100 MB, 5580 
> hits, 8320 requests, 0.671 recent hit rate, 14400 save period in seconds
> Row Cache  : entries 0, size 0 bytes, capacity 0 bytes, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> Counter Cache  : entries 0, size 0 bytes, capacity 50 MB, 0 hits, 0 
> requests, NaN recent hit rate, 7200 save period in seconds
> Token  : (invoke with -T/--tokens to see all 256 tokens)
> {code}
> Correct DCs and RACKs are returned by *nodetool status* and *nodetool 
> gossipinfo* commands:
> {code}
> # nodetool gossipinfo|grep -E 'RACK|DC'
>   DC:POZ
>   RACK:RACK30
>   DC:POZ
>   RACK:RACK30
>   DC:SJC
>   RACK:RACK68
>   DC:POZ
>   RACK:RACK30
>   DC:SJC
>   RACK:RACK62
>   DC:SJC
>   RACK:RACK62
> {code}
> {code}
> # nodetool status|grep Datacenter
> Datacenter: SJC
> Datacenter: POZ
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-9494) Need to set TTL with COPY command

2015-09-24 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-9494:
---

Assignee: Stefania

> Need to set TTL with COPY command
> -
>
> Key: CASSANDRA-9494
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9494
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: API
>Reporter: Ed Chen
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 2.1.x
>
>
> I can import a chunk of data into a Cassandra table with the COPY command, e.g.:
> COPY my_table (name, address) FROM my_file.csv WITH option='value' ... ;
> But I am not able to specify a finite TTL in the COPY command with "USING TTL 
> 3600", for example. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8119) More Expressive Consistency Levels

2015-09-24 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-8119:
--
Fix Version/s: 3.x

> More Expressive Consistency Levels
> --
>
> Key: CASSANDRA-8119
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8119
> Project: Cassandra
>  Issue Type: New Feature
>  Components: API
>Reporter: Tyler Hobbs
> Fix For: 3.x
>
>
> For some multi-datacenter environments, the current set of consistency levels 
> is too restrictive.  For example, the following consistency requirements 
> cannot be expressed:
> * LOCAL_QUORUM in two specific DCs
> * LOCAL_QUORUM in the local DC plus LOCAL_QUORUM in at least one other DC
> * LOCAL_QUORUM in the local DC plus N remote replicas in any DC
> I propose that we add a new consistency level: CUSTOM.  In the v4 (or v5) 
> protocol, this would be accompanied by an additional map argument.  A map of 
> {DC: CL} or a map of {DC: int} is sufficient to cover the first example.  If 
> we accept a special key to represent "any datacenter", the second case can 
> be handled.  A similar technique could be used for "any other nodes".
> I'm not in love with the special keys, so if anybody has ideas for something 
> more elegant, feel free to propose them.  The main idea is that we want to be 
> flexible enough to cover any reasonable consistency or durability 
> requirements.
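
For illustration, a hedged sketch of the kind of per-DC requirement map a 
CUSTOM level could carry; the special "any other DC" key and its spelling are 
hypothetical, as discussed above:

{code}
import java.util.HashMap;
import java.util.Map;

// Hedged illustration of the proposed {DC: CL} / {DC: int} payload.
Map<String, String> custom = new HashMap<>();
custom.put("DC1", "LOCAL_QUORUM");   // LOCAL_QUORUM in two specific DCs
custom.put("DC2", "LOCAL_QUORUM");
custom.put("*",   "2");              // hypothetical "any other DC": 2 replicas
{code}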



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10347) Bulk Loader API could not tolerate even node failure

2015-09-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906841#comment-14906841
 ] 

Jonathan Ellis commented on CASSANDRA-10347:


WDYT [~pkolaczk] [~brandon.williams]? Is this worth providing given the 
foot/gun potential?

> Bulk Loader API could not tolerate even node failure
> 
>
> Key: CASSANDRA-10347
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10347
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Shenghua Wan
>Assignee: Paulo Motta
>
> When a user uses CqlBulkOutputFormat, it tries to stream to all the nodes in 
> the token range, including dead nodes, so the stream fails. 
> The C* API was designed to allow the stream() method to take a list of hosts 
> to ignore, but that overload was never utilized.
> The empty-argument stream() method is called in all existing versions of C*, 
> i.e.
> in v2.0.11, 
> https://github.com/apache/cassandra/blob/cassandra-2.0.11/src/java/org/apache/cassandra/hadoop/AbstractBulkRecordWriter.java#L122
> in v2.1.5, 
> https://github.com/apache/cassandra/blob/cassandra-2.1.5/src/java/org/apache/cassandra/hadoop/AbstractBulkRecordWriter.java#L122
> and current trunk branch 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/hadoop/cql3/CqlBulkRecordWriter.java#L241



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10323) Add more MaterializedView metrics

2015-09-24 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906950#comment-14906950
 ] 

Chris Lohfink commented on CASSANDRA-10323:
---

I'll give it a try.
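
For reference, a hedged sketch of the two timings proposed above, using the 
Dropwizard/Codahale library that Cassandra's metrics are built on; class and 
metric names are illustrative, not the eventual patch:

{code}
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

// Hedged sketch: time the partition-lock wait and the pre-write read
// separately so contention and read cost show up as distinct metrics.
public class ViewWriteTimingsSketch
{
    private final MetricRegistry registry = new MetricRegistry();
    private final Timer lockWaitTime = registry.timer("ViewLockAcquireTime");
    private final Timer readTime = registry.timer("ViewReadTime");

    public void writeToView()
    {
        try (Timer.Context ignored = lockWaitTime.time())
        {
            // acquire the partition lock here
        }
        try (Timer.Context ignored = readTime.time())
        {
            // read the existing data needed to build the view update here
        }
        // ... apply the view mutation ...
    }
}
{code}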

> Add more MaterializedView metrics
> -
>
> Key: CASSANDRA-10323
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10323
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>  Labels: lhf
> Fix For: 3.0.0 rc2
>
>
> We need to add more metrics to help understand where time is spent in 
> materialized view writes. We currently track the ratio of async base -> view 
> mutations that fail.
> We should also add
>   * The amount of time spent waiting for the partition lock (contention)
>   * The amount of time spent reading data 
> Any others? 
> [~carlyeks] [~jkni] 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[11/12] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-09-24 Thread jmckenzie
Merge branch 'cassandra-2.2' into cassandra-3.0

Conflicts:
src/java/org/apache/cassandra/service/StorageService.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5d918b8f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5d918b8f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5d918b8f

Branch: refs/heads/cassandra-3.0
Commit: 5d918b8fc0b90771caef586ce3b268c10b0d645a
Parents: 32d7616 5fe40a1
Author: Joshua McKenzie 
Authored: Thu Sep 24 14:47:18 2015 -0700
Committer: Joshua McKenzie 
Committed: Thu Sep 24 14:47:18 2015 -0700

--
 src/java/org/apache/cassandra/service/StorageService.java | 10 +-
 .../org/apache/cassandra/service/StorageServiceMBean.java |  5 -
 src/java/org/apache/cassandra/tools/NodeProbe.java|  4 ++--
 3 files changed, 15 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5d918b8f/src/java/org/apache/cassandra/service/StorageService.java
--
diff --cc src/java/org/apache/cassandra/service/StorageService.java
index 0a8717f,1527eb1..76299b7
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@@ -1571,12 -1555,7 +1571,12 @@@ public class StorageService extends Not
  return 
getTokenMetadata().getHostId(FBUtilities.getBroadcastAddress()).toString();
  }
  
 +public UUID getLocalHostUUID()
 +{
 +return 
getTokenMetadata().getHostId(FBUtilities.getBroadcastAddress());
 +}
 +
- public Map getHostIdMap()
+ public Map getEndpointToHostId()
  {
  Map mapOut = new HashMap<>();
  for (Map.Entry entry : 
getTokenMetadata().getEndpointToHostIdMapForReading().entrySet())

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5d918b8f/src/java/org/apache/cassandra/service/StorageServiceMBean.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5d918b8f/src/java/org/apache/cassandra/tools/NodeProbe.java
--



[02/12] cassandra git commit: Properly test for empty static row

2015-09-24 Thread jmckenzie
Properly test for empty static row

UnfilteredRowIterators.noRowsIterator() was testing the staticRow
against null to avoid allocating an empty Columns object, but we never
use null for empty static rows; we use Rows.EMPTY_STATIC_ROW, so the
branch was never taken. This is a trivial fix to avoid that allocation.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/32d76168
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/32d76168
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/32d76168

Branch: refs/heads/trunk
Commit: 32d7616846cb02b89fdecd087e6fbdf53e50fcb2
Parents: 53dc42d
Author: Sylvain Lebresne 
Authored: Thu Sep 24 10:02:53 2015 -0700
Committer: Sylvain Lebresne 
Committed: Thu Sep 24 10:08:17 2015 -0700

--
 .../org/apache/cassandra/db/rows/UnfilteredRowIterators.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/32d76168/src/java/org/apache/cassandra/db/rows/UnfilteredRowIterators.java
--
diff --git a/src/java/org/apache/cassandra/db/rows/UnfilteredRowIterators.java 
b/src/java/org/apache/cassandra/db/rows/UnfilteredRowIterators.java
index e251670..22628e2 100644
--- a/src/java/org/apache/cassandra/db/rows/UnfilteredRowIterators.java
+++ b/src/java/org/apache/cassandra/db/rows/UnfilteredRowIterators.java
@@ -99,8 +99,8 @@ public abstract class UnfilteredRowIterators
  */
 public static UnfilteredRowIterator noRowsIterator(final CFMetaData cfm, 
final DecoratedKey partitionKey, final Row staticRow, final DeletionTime 
partitionDeletion, final boolean isReverseOrder)
 {
-PartitionColumns columns = staticRow == null ? PartitionColumns.NONE
- : new 
PartitionColumns(Columns.from(staticRow.columns()), Columns.NONE);
+PartitionColumns columns = staticRow == Rows.EMPTY_STATIC_ROW ? 
PartitionColumns.NONE
+  : new 
PartitionColumns(Columns.from(staticRow.columns()), Columns.NONE);
 return new UnfilteredRowIterator()
 {
 public CFMetaData metadata()



[10/12] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-09-24 Thread jmckenzie
Merge branch 'cassandra-2.2' into cassandra-3.0

Conflicts:
src/java/org/apache/cassandra/service/StorageService.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5d918b8f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5d918b8f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5d918b8f

Branch: refs/heads/trunk
Commit: 5d918b8fc0b90771caef586ce3b268c10b0d645a
Parents: 32d7616 5fe40a1
Author: Joshua McKenzie 
Authored: Thu Sep 24 14:47:18 2015 -0700
Committer: Joshua McKenzie 
Committed: Thu Sep 24 14:47:18 2015 -0700

--
 src/java/org/apache/cassandra/service/StorageService.java | 10 +-
 .../org/apache/cassandra/service/StorageServiceMBean.java |  5 -
 src/java/org/apache/cassandra/tools/NodeProbe.java|  4 ++--
 3 files changed, 15 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5d918b8f/src/java/org/apache/cassandra/service/StorageService.java
--
diff --cc src/java/org/apache/cassandra/service/StorageService.java
index 0a8717f,1527eb1..76299b7
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@@ -1571,12 -1555,7 +1571,12 @@@ public class StorageService extends Not
  return 
getTokenMetadata().getHostId(FBUtilities.getBroadcastAddress()).toString();
  }
  
 +public UUID getLocalHostUUID()
 +{
 +return 
getTokenMetadata().getHostId(FBUtilities.getBroadcastAddress());
 +}
 +
- public Map getHostIdMap()
+ public Map getEndpointToHostId()
  {
  Map mapOut = new HashMap<>();
  for (Map.Entry entry : 
getTokenMetadata().getEndpointToHostIdMapForReading().entrySet())

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5d918b8f/src/java/org/apache/cassandra/service/StorageServiceMBean.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5d918b8f/src/java/org/apache/cassandra/tools/NodeProbe.java
--



[08/12] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-09-24 Thread jmckenzie
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5fe40a11
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5fe40a11
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5fe40a11

Branch: refs/heads/cassandra-3.0
Commit: 5fe40a1178c2896822a51adc28c1ee6c95efc2ff
Parents: 4a849ef 6039d0e
Author: Joshua McKenzie 
Authored: Thu Sep 24 14:45:43 2015 -0700
Committer: Joshua McKenzie 
Committed: Thu Sep 24 14:45:43 2015 -0700

--
 src/java/org/apache/cassandra/service/StorageService.java | 10 +-
 .../org/apache/cassandra/service/StorageServiceMBean.java |  5 -
 src/java/org/apache/cassandra/tools/NodeProbe.java|  4 ++--
 3 files changed, 15 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5fe40a11/src/java/org/apache/cassandra/service/StorageService.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5fe40a11/src/java/org/apache/cassandra/service/StorageServiceMBean.java
--
diff --cc src/java/org/apache/cassandra/service/StorageServiceMBean.java
index 2bbc999,7e74947..6dc5a9f
--- a/src/java/org/apache/cassandra/service/StorageServiceMBean.java
+++ b/src/java/org/apache/cassandra/service/StorageServiceMBean.java
@@@ -159,8 -159,18 +159,11 @@@ public interface StorageServiceMBean ex
  public String getLocalHostId();
  
  /** Retrieve the mapping of endpoint to host ID */
- public Map getHostIdMap();
+ public Map getEndpointToHostId();
+ 
+ /** Retrieve the mapping of host ID to endpoint */
+ public Map getHostIdToEndpoint();
  
 -/**
 - * Numeric load value.
 - * @see org.apache.cassandra.metrics.StorageMetrics#load
 - */
 -@Deprecated
 -public double getLoad();
 -
  /** Human-readable load value */
  public String getLoadString();
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5fe40a11/src/java/org/apache/cassandra/tools/NodeProbe.java
--



[09/12] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-09-24 Thread jmckenzie
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5fe40a11
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5fe40a11
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5fe40a11

Branch: refs/heads/trunk
Commit: 5fe40a1178c2896822a51adc28c1ee6c95efc2ff
Parents: 4a849ef 6039d0e
Author: Joshua McKenzie 
Authored: Thu Sep 24 14:45:43 2015 -0700
Committer: Joshua McKenzie 
Committed: Thu Sep 24 14:45:43 2015 -0700

--
 src/java/org/apache/cassandra/service/StorageService.java | 10 +-
 .../org/apache/cassandra/service/StorageServiceMBean.java |  5 -
 src/java/org/apache/cassandra/tools/NodeProbe.java|  4 ++--
 3 files changed, 15 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5fe40a11/src/java/org/apache/cassandra/service/StorageService.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5fe40a11/src/java/org/apache/cassandra/service/StorageServiceMBean.java
--
diff --cc src/java/org/apache/cassandra/service/StorageServiceMBean.java
index 2bbc999,7e74947..6dc5a9f
--- a/src/java/org/apache/cassandra/service/StorageServiceMBean.java
+++ b/src/java/org/apache/cassandra/service/StorageServiceMBean.java
@@@ -159,8 -159,18 +159,11 @@@ public interface StorageServiceMBean ex
  public String getLocalHostId();
  
  /** Retrieve the mapping of endpoint to host ID */
- public Map getHostIdMap();
+ public Map getEndpointToHostId();
+ 
+ /** Retrieve the mapping of host ID to endpoint */
+ public Map getHostIdToEndpoint();
  
 -/**
 - * Numeric load value.
 - * @see org.apache.cassandra.metrics.StorageMetrics#load
 - */
 -@Deprecated
 -public double getLoad();
 -
  /** Human-readable load value */
  public String getLoadString();
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5fe40a11/src/java/org/apache/cassandra/tools/NodeProbe.java
--



[jira] [Commented] (CASSANDRA-2494) Quorum reads are not monotonically consistent

2015-09-24 Thread Aaron Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906978#comment-14906978
 ] 

Aaron Brown commented on CASSANDRA-2494:


The relevant code in the patch has changed significantly. Is the monotonic read 
consistency guarantee still provided?

> Quorum reads are not monotonically consistent
> -
>
> Key: CASSANDRA-2494
> URL: https://issues.apache.org/jira/browse/CASSANDRA-2494
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sean Bridges
>Assignee: Jonathan Ellis
>Priority: Minor
> Fix For: 1.0.0
>
> Attachments: 2494-v2.txt, 2494.txt
>
>
> As discussed in this thread,
> http://www.mail-archive.com/user@cassandra.apache.org/msg12421.html
> Quorum reads should be consistent.  Assume we have a cluster of 3 nodes 
> (X,Y,Z) and a replication factor of 3. If a write of N is committed to X, but 
> not Y and Z, then a read from X should not return N unless N is 
> committed to at least two nodes.  To ensure this, a read from X should wait 
> for an ack of the read repair write from either Y or Z before returning.
> Are there system tests for cassandra?  If so, there should be a test similar 
> to the original post in the email thread.  One thread should write 1,2,3... 
> at consistency level ONE.  Another thread should read at consistency level 
> QUORUM from a random host, and verify that each read is >= the last read.
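
For illustration, a hedged sketch of the checker described above; write() and 
read() stand in for whatever client calls a harness would use (a CL ONE writer 
and a CL QUORUM reader against a random host):

{code}
import java.util.concurrent.atomic.AtomicLong;

// Hedged sketch: one thread writes an increasing counter at CL ONE, another
// reads at CL QUORUM and fails if it ever observes a value going backwards.
final AtomicLong lastSeen = new AtomicLong(-1);

Thread writer = new Thread(() -> {
    for (long i = 0; ; i++)
        write(i);                       // hypothetical CL ONE write
});

Thread reader = new Thread(() -> {
    while (true)
    {
        long value = read();            // hypothetical CL QUORUM read from a random host
        long previous = lastSeen.getAndSet(value);
        if (value < previous)
            throw new AssertionError("non-monotonic read: " + value + " < " + previous);
    }
});
{code}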



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8844) Change Data Capture (CDC)

2015-09-24 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907007#comment-14907007
 ] 

DOAN DuyHai edited comment on CASSANDRA-8844 at 9/24/15 8:58 PM:
-

Questions on the operational side:

1) What happens with repair (nodetool or read-repair)? Does the proposed impl 
re-push the notifications to consumers?
2) What happens in the corner case where a replica did receive a mutation but 
did not ack the coordinator in a timely manner, so it will receive a hint later? 
Is the notification pushed twice?
3) What happens when a new node is joining and accepting writes for a token 
range while the old node is still accepting writes for that portion of the 
token range? Will the notification be pushed to any consumer attached to the 
"joining" node?
4) What happens with the write survey mode? Do we push notifications in this 
case?

I know that the deliver-at-least-once semantics allow us to send notifications 
more than once, but it's always good to clarify all these ops scenarios so there 
are fewer surprises when the feature is deployed.


was (Author: doanduyhai):
Questions on the operational side:

1) What happens with repair (nodetool or read-repair)? Does the proposed impl 
re-push the notifications to consumers?
2) What happens in the corner case where a replica did receive a mutation but 
did not ack the coordinator in a timely manner, so it will receive a hint later? 
Is the notification pushed twice?
3) What happens when a new node is joining and accepting writes for a token 
range while the old node is still accepting writes for that portion of the 
token range? Will the notification be pushed to any consumer attached to the 
"joining" node?
4) What happens with the write survey? Do we push notifications in this case?

I know that the deliver-at-least-once semantics allow us to send notifications 
more than once, but it's always good to clarify all these ops scenarios so there 
are fewer surprises when the feature is deployed.

> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Tupshin Harper
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 3.x
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]) propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism for implementing good and reliable 
> (deliver-at-least-once with possible mechanisms for deliver-exactly-once ) 
> continuous semi-realtime feeds of mutations going into a Cassandra cluster.
> # To eliminate the developmental and operational burden of users so that they 
> don't have to do dual writes to other systems.
> # For users that are currently doing batch export from a Cassandra system, 
> give them the opportunity to make that realtime with a minimum of coding.
> h2. The mechanism:
> We propose a durable logging mechanism that functions similar to a commitlog, 
> with the following nuances:
> - Takes place on every node, not just the coordinator, so RF number of copies 
> are logged.
> - Separate log per table.
> - Per-table configuration. Only tables that are specified as CDC_LOG would do 
> any logging.
> - Per DC. We are trying to keep the complexity to a minimum to make this an 
> easy enhancement, but most likely use cases would prefer to only implement 
> CDC logging in one (or a subset) of the DCs that are being replicated to
> - In the critical path of ConsistencyLevel acknowledgment. Just as with the 
> commitlog, failure to write to the CDC log should fail that node's write. If 
> that means the requested consistency level was not met, then clients *should* 
> experience UnavailableExceptions.
> - Be written in a Row-centric manner such that it is easy for consumers to 
> reconstitute rows atomically.
> - Written in a simple format designed to 

[05/12] cassandra git commit: Fix DC and Rack in nodetool info

2015-09-24 Thread jmckenzie
Fix DC and Rack in nodetool info

Patch by Carl Yeksigian; reviewed by stefania for CASSANDRA-10382


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6039d0ef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6039d0ef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6039d0ef

Branch: refs/heads/cassandra-2.1
Commit: 6039d0ef74a1d4ca699626302596d621ab04
Parents: 0c1432a
Author: Carl Yeksigian 
Authored: Thu Sep 24 14:43:44 2015 -0700
Committer: Joshua McKenzie 
Committed: Thu Sep 24 14:43:44 2015 -0700

--
 src/java/org/apache/cassandra/service/StorageService.java | 10 +-
 .../org/apache/cassandra/service/StorageServiceMBean.java |  5 -
 src/java/org/apache/cassandra/tools/NodeProbe.java|  4 ++--
 3 files changed, 15 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6039d0ef/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index ffe219e..8a2e71e 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -1416,7 +1416,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 return 
getTokenMetadata().getHostId(FBUtilities.getBroadcastAddress()).toString();
 }
 
-public Map getHostIdMap()
+public Map getEndpointToHostId()
 {
 Map mapOut = new HashMap<>();
 for (Map.Entry entry : 
getTokenMetadata().getEndpointToHostIdMapForReading().entrySet())
@@ -1424,6 +1424,14 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 return mapOut;
 }
 
+public Map getHostIdToEndpoint()
+{
+Map mapOut = new HashMap<>();
+for (Map.Entry entry : 
getTokenMetadata().getEndpointToHostIdMapForReading().entrySet())
+mapOut.put(entry.getValue().toString(), 
entry.getKey().getHostAddress());
+return mapOut;
+}
+
 /**
  * Construct the range to endpoint mapping based on the true view
  * of the world.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6039d0ef/src/java/org/apache/cassandra/service/StorageServiceMBean.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageServiceMBean.java 
b/src/java/org/apache/cassandra/service/StorageServiceMBean.java
index 1f86d82..7e74947 100644
--- a/src/java/org/apache/cassandra/service/StorageServiceMBean.java
+++ b/src/java/org/apache/cassandra/service/StorageServiceMBean.java
@@ -159,7 +159,10 @@ public interface StorageServiceMBean extends 
NotificationEmitter
 public String getLocalHostId();
 
 /** Retrieve the mapping of endpoint to host ID */
-public Map getHostIdMap();
+public Map getEndpointToHostId();
+
+/** Retrieve the mapping of host ID to endpoint */
+public Map getHostIdToEndpoint();
 
 /**
  * Numeric load value.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6039d0ef/src/java/org/apache/cassandra/tools/NodeProbe.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeProbe.java 
b/src/java/org/apache/cassandra/tools/NodeProbe.java
index caa12c3..6f2b6fb 100644
--- a/src/java/org/apache/cassandra/tools/NodeProbe.java
+++ b/src/java/org/apache/cassandra/tools/NodeProbe.java
@@ -502,7 +502,7 @@ public class NodeProbe implements AutoCloseable
 
 public Map getHostIdMap()
 {
-return ssProxy.getHostIdMap();
+return ssProxy.getEndpointToHostId();
 }
 
 public String getLoadString()
@@ -807,7 +807,7 @@ public class NodeProbe implements AutoCloseable
 
 public String getEndpoint()
 {
-Map hostIdToEndpoint = ssProxy.getHostIdMap();
+Map hostIdToEndpoint = ssProxy.getHostIdToEndpoint();
 return hostIdToEndpoint.get(ssProxy.getLocalHostId());
 }
 



[01/12] cassandra git commit: Simplify MultiCBuilder implementation

2015-09-24 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 0c1432ac7 -> 6039d0ef7
  refs/heads/cassandra-2.2 4a849efeb -> 5fe40a117
  refs/heads/cassandra-3.0 32d761684 -> 5d918b8fc
  refs/heads/trunk 52a10696a -> 4a0d1caa2


Simplify MultiCBuilder implementation

We had 2 implementations of MultiCBuilder, but one is now unused. The
patch thus simplifies the implementation by getting rid of the unused
wrap method and making the whole class non-abstract.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/53dc42d4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/53dc42d4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/53dc42d4

Branch: refs/heads/trunk
Commit: 53dc42d4243766f0a828dc167175c7036c1b3942
Parents: 2e87c43
Author: Sylvain Lebresne 
Authored: Thu Sep 24 09:41:57 2015 -0700
Committer: Sylvain Lebresne 
Committed: Thu Sep 24 10:08:13 2015 -0700

--
 .../org/apache/cassandra/db/MultiCBuilder.java  | 507 ---
 1 file changed, 199 insertions(+), 308 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/53dc42d4/src/java/org/apache/cassandra/db/MultiCBuilder.java
--
diff --git a/src/java/org/apache/cassandra/db/MultiCBuilder.java 
b/src/java/org/apache/cassandra/db/MultiCBuilder.java
index ab1c94d..be654fa 100644
--- a/src/java/org/apache/cassandra/db/MultiCBuilder.java
+++ b/src/java/org/apache/cassandra/db/MultiCBuilder.java
@@ -26,101 +26,64 @@ import org.apache.cassandra.utils.btree.BTreeSet;
 /**
  * Builder that allow to build multiple Clustering/Slice.Bound at the same 
time.
  */
-public abstract class MultiCBuilder
+public class MultiCBuilder
 {
 /**
- * Creates a new empty {@code MultiCBuilder}.
+ * The table comparator.
  */
-public static MultiCBuilder create(ClusteringComparator comparator)
-{
-return new ConcreteMultiCBuilder(comparator);
-}
+private final ClusteringComparator comparator;
 
 /**
- * Wraps an existing {@code CBuilder} to provide him with a MultiCBuilder 
interface
- * for the sake of passing it to {@link Restriction.appendTo}. The 
resulting
- * {@code MultiCBuilder} will still only be able to create a single 
clustering/bound
- * and an {@code IllegalArgumentException} will be thrown if elements that 
added that
- * would correspond to building multiple clusterings.
+ * The elements of the clusterings
  */
-public static MultiCBuilder wrap(final CBuilder builder)
-{
-return new MultiCBuilder()
-{
-private boolean containsNull;
-private boolean containsUnset;
-private boolean hasMissingElements;
-
-public MultiCBuilder addElementToAll(ByteBuffer value)
-{
-builder.add(value);
-
-if (value == null)
-containsNull = true;
-if (value == ByteBufferUtil.UNSET_BYTE_BUFFER)
-containsUnset = true;
-
-return this;
-}
-
-public MultiCBuilder addEachElementToAll(List values)
-{
-if (values.isEmpty())
-{
-hasMissingElements = true;
-return this;
-}
+private final List elementsList = new ArrayList<>();
 
-if (values.size() > 1)
-throw new IllegalArgumentException();
-
-return addElementToAll(values.get(0));
-}
-
-public MultiCBuilder addAllElementsToAll(List 
values)
-{
-if (values.isEmpty())
-{
-hasMissingElements = true;
-return this;
-}
-
-if (values.size() > 1)
-throw new IllegalArgumentException();
+/**
+ * The number of elements that have been added.
+ */
+private int size;
 
-return addEachElementToAll(values.get(0));
-}
+/**
+ * true if the clusterings have been build, 
false otherwise.
+ */
+private boolean built;
 
-public int remainingCount()
-{
-return builder.remainingCount();
-}
+/**
+ * true if the clusterings contains some null 
elements.
+ */
+private boolean containsNull;
 
-public boolean containsNull()
-{
-return containsNull;
-}
+/**
+ * true if the composites contains some unset 
elements.
+ */
+private boolean containsUnset;
 
-public boolean containsUnset()
-{
-   

[06/12] cassandra git commit: Fix DC and Rack in nodetool info

2015-09-24 Thread jmckenzie
Fix DC and Rack in nodetool info

Patch by Carl Yeksigian; reviewed by stefania for CASSANDRA-10382


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6039d0ef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6039d0ef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6039d0ef

Branch: refs/heads/trunk
Commit: 6039d0ef74a1d4ca699626302596d621ab04
Parents: 0c1432a
Author: Carl Yeksigian 
Authored: Thu Sep 24 14:43:44 2015 -0700
Committer: Joshua McKenzie 
Committed: Thu Sep 24 14:43:44 2015 -0700

--
 src/java/org/apache/cassandra/service/StorageService.java | 10 +-
 .../org/apache/cassandra/service/StorageServiceMBean.java |  5 -
 src/java/org/apache/cassandra/tools/NodeProbe.java|  4 ++--
 3 files changed, 15 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6039d0ef/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index ffe219e..8a2e71e 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -1416,7 +1416,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 return 
getTokenMetadata().getHostId(FBUtilities.getBroadcastAddress()).toString();
 }
 
-public Map getHostIdMap()
+public Map getEndpointToHostId()
 {
 Map mapOut = new HashMap<>();
 for (Map.Entry entry : 
getTokenMetadata().getEndpointToHostIdMapForReading().entrySet())
@@ -1424,6 +1424,14 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 return mapOut;
 }
 
+public Map getHostIdToEndpoint()
+{
+Map mapOut = new HashMap<>();
+for (Map.Entry entry : 
getTokenMetadata().getEndpointToHostIdMapForReading().entrySet())
+mapOut.put(entry.getValue().toString(), 
entry.getKey().getHostAddress());
+return mapOut;
+}
+
 /**
  * Construct the range to endpoint mapping based on the true view
  * of the world.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6039d0ef/src/java/org/apache/cassandra/service/StorageServiceMBean.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageServiceMBean.java 
b/src/java/org/apache/cassandra/service/StorageServiceMBean.java
index 1f86d82..7e74947 100644
--- a/src/java/org/apache/cassandra/service/StorageServiceMBean.java
+++ b/src/java/org/apache/cassandra/service/StorageServiceMBean.java
@@ -159,7 +159,10 @@ public interface StorageServiceMBean extends 
NotificationEmitter
 public String getLocalHostId();
 
 /** Retrieve the mapping of endpoint to host ID */
-public Map getHostIdMap();
+public Map getEndpointToHostId();
+
+/** Retrieve the mapping of host ID to endpoint */
+public Map getHostIdToEndpoint();
 
 /**
  * Numeric load value.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6039d0ef/src/java/org/apache/cassandra/tools/NodeProbe.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeProbe.java 
b/src/java/org/apache/cassandra/tools/NodeProbe.java
index caa12c3..6f2b6fb 100644
--- a/src/java/org/apache/cassandra/tools/NodeProbe.java
+++ b/src/java/org/apache/cassandra/tools/NodeProbe.java
@@ -502,7 +502,7 @@ public class NodeProbe implements AutoCloseable
 
 public Map getHostIdMap()
 {
-return ssProxy.getHostIdMap();
+return ssProxy.getEndpointToHostId();
 }
 
 public String getLoadString()
@@ -807,7 +807,7 @@ public class NodeProbe implements AutoCloseable
 
 public String getEndpoint()
 {
-Map hostIdToEndpoint = ssProxy.getHostIdMap();
+Map hostIdToEndpoint = ssProxy.getHostIdToEndpoint();
 return hostIdToEndpoint.get(ssProxy.getLocalHostId());
 }
 



[07/12] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-09-24 Thread jmckenzie
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5fe40a11
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5fe40a11
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5fe40a11

Branch: refs/heads/cassandra-2.2
Commit: 5fe40a1178c2896822a51adc28c1ee6c95efc2ff
Parents: 4a849ef 6039d0e
Author: Joshua McKenzie 
Authored: Thu Sep 24 14:45:43 2015 -0700
Committer: Joshua McKenzie 
Committed: Thu Sep 24 14:45:43 2015 -0700

--
 src/java/org/apache/cassandra/service/StorageService.java | 10 +-
 .../org/apache/cassandra/service/StorageServiceMBean.java |  5 -
 src/java/org/apache/cassandra/tools/NodeProbe.java|  4 ++--
 3 files changed, 15 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5fe40a11/src/java/org/apache/cassandra/service/StorageService.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5fe40a11/src/java/org/apache/cassandra/service/StorageServiceMBean.java
--
diff --cc src/java/org/apache/cassandra/service/StorageServiceMBean.java
index 2bbc999,7e74947..6dc5a9f
--- a/src/java/org/apache/cassandra/service/StorageServiceMBean.java
+++ b/src/java/org/apache/cassandra/service/StorageServiceMBean.java
@@@ -159,8 -159,18 +159,11 @@@ public interface StorageServiceMBean ex
  public String getLocalHostId();
  
  /** Retrieve the mapping of endpoint to host ID */
- public Map getHostIdMap();
+ public Map getEndpointToHostId();
+ 
+ /** Retrieve the mapping of host ID to endpoint */
+ public Map getHostIdToEndpoint();
  
 -/**
 - * Numeric load value.
 - * @see org.apache.cassandra.metrics.StorageMetrics#load
 - */
 -@Deprecated
 -public double getLoad();
 -
  /** Human-readable load value */
  public String getLoadString();
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5fe40a11/src/java/org/apache/cassandra/tools/NodeProbe.java
--



[04/12] cassandra git commit: Fix DC and Rack in nodetool info

2015-09-24 Thread jmckenzie
Fix DC and Rack in nodetool info

Patch by Carl Yeksigian; reviewed by stefania for CASSANDRA-10382


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6039d0ef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6039d0ef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6039d0ef

Branch: refs/heads/cassandra-3.0
Commit: 6039d0ef74a1d4ca699626302596d621ab04
Parents: 0c1432a
Author: Carl Yeksigian 
Authored: Thu Sep 24 14:43:44 2015 -0700
Committer: Joshua McKenzie 
Committed: Thu Sep 24 14:43:44 2015 -0700

--
 src/java/org/apache/cassandra/service/StorageService.java | 10 +-
 .../org/apache/cassandra/service/StorageServiceMBean.java |  5 -
 src/java/org/apache/cassandra/tools/NodeProbe.java|  4 ++--
 3 files changed, 15 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6039d0ef/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index ffe219e..8a2e71e 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -1416,7 +1416,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 return 
getTokenMetadata().getHostId(FBUtilities.getBroadcastAddress()).toString();
 }
 
-public Map getHostIdMap()
+public Map getEndpointToHostId()
 {
 Map mapOut = new HashMap<>();
 for (Map.Entry entry : 
getTokenMetadata().getEndpointToHostIdMapForReading().entrySet())
@@ -1424,6 +1424,14 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 return mapOut;
 }
 
+public Map getHostIdToEndpoint()
+{
+Map mapOut = new HashMap<>();
+for (Map.Entry entry : 
getTokenMetadata().getEndpointToHostIdMapForReading().entrySet())
+mapOut.put(entry.getValue().toString(), 
entry.getKey().getHostAddress());
+return mapOut;
+}
+
 /**
  * Construct the range to endpoint mapping based on the true view
  * of the world.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6039d0ef/src/java/org/apache/cassandra/service/StorageServiceMBean.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageServiceMBean.java 
b/src/java/org/apache/cassandra/service/StorageServiceMBean.java
index 1f86d82..7e74947 100644
--- a/src/java/org/apache/cassandra/service/StorageServiceMBean.java
+++ b/src/java/org/apache/cassandra/service/StorageServiceMBean.java
@@ -159,7 +159,10 @@ public interface StorageServiceMBean extends 
NotificationEmitter
 public String getLocalHostId();
 
 /** Retrieve the mapping of endpoint to host ID */
-public Map getHostIdMap();
+public Map getEndpointToHostId();
+
+/** Retrieve the mapping of host ID to endpoint */
+public Map getHostIdToEndpoint();
 
 /**
  * Numeric load value.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6039d0ef/src/java/org/apache/cassandra/tools/NodeProbe.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeProbe.java 
b/src/java/org/apache/cassandra/tools/NodeProbe.java
index caa12c3..6f2b6fb 100644
--- a/src/java/org/apache/cassandra/tools/NodeProbe.java
+++ b/src/java/org/apache/cassandra/tools/NodeProbe.java
@@ -502,7 +502,7 @@ public class NodeProbe implements AutoCloseable
 
 public Map getHostIdMap()
 {
-return ssProxy.getHostIdMap();
+return ssProxy.getEndpointToHostId();
 }
 
 public String getLoadString()
@@ -807,7 +807,7 @@ public class NodeProbe implements AutoCloseable
 
 public String getEndpoint()
 {
-Map hostIdToEndpoint = ssProxy.getHostIdMap();
+Map hostIdToEndpoint = ssProxy.getHostIdToEndpoint();
 return hostIdToEndpoint.get(ssProxy.getLocalHostId());
 }
 



[03/12] cassandra git commit: Fix DC and Rack in nodetool info

2015-09-24 Thread jmckenzie
Fix DC and Rack in nodetool info

Patch by Carl Yeksigian; reviewed by stefania for CASSANDRA-10382


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6039d0ef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6039d0ef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6039d0ef

Branch: refs/heads/cassandra-2.2
Commit: 6039d0ef74a1d4ca699626302596d621ab04
Parents: 0c1432a
Author: Carl Yeksigian 
Authored: Thu Sep 24 14:43:44 2015 -0700
Committer: Joshua McKenzie 
Committed: Thu Sep 24 14:43:44 2015 -0700

--
 src/java/org/apache/cassandra/service/StorageService.java | 10 +-
 .../org/apache/cassandra/service/StorageServiceMBean.java |  5 -
 src/java/org/apache/cassandra/tools/NodeProbe.java|  4 ++--
 3 files changed, 15 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6039d0ef/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index ffe219e..8a2e71e 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -1416,7 +1416,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 return 
getTokenMetadata().getHostId(FBUtilities.getBroadcastAddress()).toString();
 }
 
-public Map getHostIdMap()
+public Map getEndpointToHostId()
 {
 Map mapOut = new HashMap<>();
 for (Map.Entry entry : 
getTokenMetadata().getEndpointToHostIdMapForReading().entrySet())
@@ -1424,6 +1424,14 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 return mapOut;
 }
 
+public Map getHostIdToEndpoint()
+{
+Map mapOut = new HashMap<>();
+for (Map.Entry entry : 
getTokenMetadata().getEndpointToHostIdMapForReading().entrySet())
+mapOut.put(entry.getValue().toString(), 
entry.getKey().getHostAddress());
+return mapOut;
+}
+
 /**
  * Construct the range to endpoint mapping based on the true view
  * of the world.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6039d0ef/src/java/org/apache/cassandra/service/StorageServiceMBean.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageServiceMBean.java 
b/src/java/org/apache/cassandra/service/StorageServiceMBean.java
index 1f86d82..7e74947 100644
--- a/src/java/org/apache/cassandra/service/StorageServiceMBean.java
+++ b/src/java/org/apache/cassandra/service/StorageServiceMBean.java
@@ -159,7 +159,10 @@ public interface StorageServiceMBean extends 
NotificationEmitter
 public String getLocalHostId();
 
 /** Retrieve the mapping of endpoint to host ID */
-public Map getHostIdMap();
+public Map getEndpointToHostId();
+
+/** Retrieve the mapping of host ID to endpoint */
+public Map getHostIdToEndpoint();
 
 /**
  * Numeric load value.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6039d0ef/src/java/org/apache/cassandra/tools/NodeProbe.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeProbe.java 
b/src/java/org/apache/cassandra/tools/NodeProbe.java
index caa12c3..6f2b6fb 100644
--- a/src/java/org/apache/cassandra/tools/NodeProbe.java
+++ b/src/java/org/apache/cassandra/tools/NodeProbe.java
@@ -502,7 +502,7 @@ public class NodeProbe implements AutoCloseable
 
 public Map getHostIdMap()
 {
-return ssProxy.getHostIdMap();
+return ssProxy.getEndpointToHostId();
 }
 
 public String getLoadString()
@@ -807,7 +807,7 @@ public class NodeProbe implements AutoCloseable
 
 public String getEndpoint()
 {
-Map hostIdToEndpoint = ssProxy.getHostIdMap();
+Map hostIdToEndpoint = ssProxy.getHostIdToEndpoint();
 return hostIdToEndpoint.get(ssProxy.getLocalHostId());
 }
 



[12/12] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-09-24 Thread jmckenzie
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4a0d1caa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4a0d1caa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4a0d1caa

Branch: refs/heads/trunk
Commit: 4a0d1caa262af3b6f2b6d329e45766b4df845a88
Parents: 52a1069 5d918b8
Author: Joshua McKenzie 
Authored: Thu Sep 24 14:47:28 2015 -0700
Committer: Joshua McKenzie 
Committed: Thu Sep 24 14:47:28 2015 -0700

--
 .../org/apache/cassandra/db/MultiCBuilder.java  | 507 ---
 .../db/rows/UnfilteredRowIterators.java |   4 +-
 .../cassandra/service/StorageService.java   |  10 +-
 .../cassandra/service/StorageServiceMBean.java  |   5 +-
 .../org/apache/cassandra/tools/NodeProbe.java   |   4 +-
 5 files changed, 216 insertions(+), 314 deletions(-)
--




[jira] [Created] (CASSANDRA-10396) Simplify row cache invalidation code

2015-09-24 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-10396:
--

 Summary: Simplify row cache invalidation code
 Key: CASSANDRA-10396
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10396
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Trivial
 Fix For: 2.2.x


CFS.maybeUpdateRowCache and CFS.invalidateRowCache are nearly identical, except 
the latter does some unnecessary extra work looking up CFID.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-10392) Allow Cassandra to trace to custom tracing implementations

2015-09-24 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-10392:

Comment: was deleted

(was: patch coming soon…)

> Allow Cassandra to trace to custom tracing implementations 
> ---
>
> Key: CASSANDRA-10392
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10392
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: mck
>Assignee: mck
> Attachments: 10392-trunk.txt
>
>
> It is possible to use an external tracing solution in Cassandra by abstracting 
> the writing of tracing data to the system_traces tables out of the tracing 
> package into separate implementation classes, leaving abstract classes in 
> place that define the interface and the rest of C* tracing behaviour.
> Then, via the system property "cassandra.custom_tracing_class", the Tracing 
> class implementation could be swapped out for a third-party one.
> An example of this is adding Zipkin tracing to Cassandra, as shown in the 
> Summit presentation.
> In addition, this patch passes the custom payload through into the tracing 
> session, allowing a third-party tracing solution like Zipkin to do full-stack 
> tracing from clients through and into Cassandra.
> There are still a few todos and fixmes in the initial patch, but I'm submitting 
> early to get feedback.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10392) Allow Cassandra to trace to custom tracing implementations

2015-09-24 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-10392:

Attachment: 10392-trunk.txt

Initial patch version. As noted, there are edges to polish here, but I'm after 
community input on whether the general idea and direction work.
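
A minimal sketch of the wiring the description below implies, assuming only the 
system property named there; the example class name is hypothetical and the 
real hook points are in the attached patch:

{code}
// Hedged sketch: pick the Tracing implementation at startup from
// -Dcassandra.custom_tracing_class=com.example.zipkin.ZipkinTracing
String customClass = System.getProperty("cassandra.custom_tracing_class");
if (customClass != null)
{
    Object tracing = Class.forName(customClass).newInstance();
    // ... install 'tracing' in place of the default system_traces-backed
    // implementation ...
}
{code}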

> Allow Cassandra to trace to custom tracing implementations 
> ---
>
> Key: CASSANDRA-10392
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10392
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: mck
>Assignee: mck
> Attachments: 10392-trunk.txt
>
>
> It is possible to use an external tracing solution in Cassandra by abstracting 
> the writing of tracing data to the system_traces tables out of the tracing 
> package into separate implementation classes, leaving abstract classes in 
> place that define the interface and the rest of C* tracing behaviour.
> Then, via the system property "cassandra.custom_tracing_class", the Tracing 
> class implementation could be swapped out for a third-party one.
> An example of this is adding Zipkin tracing to Cassandra, as shown in the 
> Summit presentation.
> In addition, this patch passes the custom payload through into the tracing 
> session, allowing a third-party tracing solution like Zipkin to do full-stack 
> tracing from clients through and into Cassandra.
> There are still a few todos and fixmes in the initial patch, but I'm submitting 
> early to get feedback.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10396) Simplify row cache invalidation code

2015-09-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906948#comment-14906948
 ] 

Jonathan Ellis edited comment on CASSANDRA-10396 at 9/24/15 8:28 PM:
-

http://github.com/jbellis/cassandra/commits/10396

(note that the logic of "is rowcache enabled" is a superset of "is this a 2i" 
since 2i will never have rowcache enabled)


was (Author: jbellis):
github.com/jbellis/cassandra/commits/10396

(note that the logic of "is rowcache enabled" is a superset of "is this a 2i" 
since 2i will never have rowcache enabled)

> Simplify row cache invalidation code
> 
>
> Key: CASSANDRA-10396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10396
> Project: Cassandra
>  Issue Type: Task
>  Components: Core
>Reporter: Jonathan Ellis
>Assignee: Jonathan Ellis
>Priority: Trivial
> Fix For: 2.2.x
>
>
> CFS.maybeUpdateRowCache and CFS.invalidateRowCache are nearly identical, 
> except the latter does some unnecessary extra work looking up CFID.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10396) Simplify row cache invalidation code

2015-09-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906948#comment-14906948
 ] 

Jonathan Ellis commented on CASSANDRA-10396:


github.com/jbellis/cassandra/commits/10396

(note that the logic of "is rowcache enabled" is a superset of "is this a 2i" 
since 2i will never have rowcache enabled)

> Simplify row cache invalidation code
> 
>
> Key: CASSANDRA-10396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10396
> Project: Cassandra
>  Issue Type: Task
>  Components: Core
>Reporter: Jonathan Ellis
>Assignee: Jonathan Ellis
>Priority: Trivial
> Fix For: 2.2.x
>
>
> CFS.maybeUpdateRowCache and CFS.invalidateRowCache are nearly identical, 
> except the latter does some unnecessary extra work looking up CFID.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10341) Streaming does not guarantee cache invalidation

2015-09-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906954#comment-14906954
 ] 

Jonathan Ellis commented on CASSANDRA-10341:


We need to invalidate *after* adding sstables to CFS, or a read request can 
still cache stale data before the add is finished.
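
In other words (a hedged sketch; the method names are illustrative, not the 
actual streaming code):

{code}
// Make the streamed data readable first...
cfs.addSSTables(newSSTables);
// ...then invalidate, so a racing read cannot re-cache the stale version.
for (DecoratedKey key : streamedKeys)
    cfs.invalidateCachedPartition(key);
{code}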

> Streaming does not guarantee cache invalidation
> ---
>
> Key: CASSANDRA-10341
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10341
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Paulo Motta
>
> Looking at the code, we attempt to invalidate the row cache for any rows we 
> receive via streaming; however, we invalidate them immediately, before the new 
> data is available. So, if a row is requested (which is likely if it is "hot") in 
> that interval, it will be re-cached and not invalidated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8844) Change Data Capture (CDC)

2015-09-24 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907007#comment-14907007
 ] 

DOAN DuyHai commented on CASSANDRA-8844:


Questions on the operational side:

1) What happens with repair (nodetool or read-repair)? Does the proposed impl 
re-push the notifications to consumers?
2) What happens in the corner case where a replica did receive a mutation but 
did not ack the coordinator in a timely manner, so it will receive a hint later? 
Is the notification pushed twice?
3) What happens when a new node is joining and accepting writes for a token 
range while the old node is still accepting writes for that portion of the 
token range? Will the notification be pushed to any consumer attached to the 
"joining" node?
4) What happens with the write survey? Do we push notifications in this case?

I know that the deliver-at-least-once semantics allow us to send notifications 
more than once, but it's always good to clarify all these ops scenarios so there 
are fewer surprises when the feature is deployed.

> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Tupshin Harper
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 3.x
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]) propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism for implementing good and reliable 
> (deliver-at-least-once with possible mechanisms for deliver-exactly-once ) 
> continuous semi-realtime feeds of mutations going into a Cassandra cluster.
> # To eliminate the developmental and operational burden of users so that they 
> don't have to do dual writes to other systems.
> # For users that are currently doing batch export from a Cassandra system, 
> give them the opportunity to make that realtime with a minimum of coding.
> h2. The mechanism:
> We propose a durable logging mechanism that functions similar to a commitlog, 
> with the following nuances:
> - Takes place on every node, not just the coordinator, so RF number of copies 
> are logged.
> - Separate log per table.
> - Per-table configuration. Only tables that are specified as CDC_LOG would do 
> any logging.
> - Per DC. We are trying to keep the complexity to a minimum to make this an 
> easy enhancement, but most likely use cases would prefer to only implement 
> CDC logging in one (or a subset) of the DCs that are being replicated to
> - In the critical path of ConsistencyLevel acknowledgment. Just as with the 
> commitlog, failure to write to the CDC log should fail that node's write. If 
> that means the requested consistency level was not met, then clients *should* 
> experience UnavailableExceptions.
> - Be written in a Row-centric manner such that it is easy for consumers to 
> reconstitute rows atomically.
> - Written in a simple format designed to be consumed *directly* by daemons 
> written in non JVM languages
> h2. Nice-to-haves
> I strongly suspect that the following features will be asked for, but I also 
> believe that they can be deferred for a subsequent release, and to gauge 
> actual interest.
> - Multiple logs per table. This would make it easy to have multiple 
> "subscribers" to a single table's changes. A workaround would be to create a 
> forking daemon listener, but that's not a great answer.
> - Log filtering. Being able to apply filters, including UDF-based filters 
> would make Cassandra a much more versatile feeder into other systems, and 
> again, reduce complexity that would otherwise need to be built into the 
> daemons.
> h2. Format and Consumption
> - Cassandra would only write to the CDC log, and never delete from it. 
> - Cleaning up consumed logfiles would be the client daemon's responsibility
> - Logfile size should probably be configurable.
> - Logfiles should be named 

[jira] [Commented] (CASSANDRA-8844) Change Data Capture (CDC)

2015-09-24 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907141#comment-14907141
 ] 

Joshua McKenzie commented on CASSANDRA-8844:


bq. What happens with repair (nodetool or read-repair)? Does the proposed impl 
re-push the notifications to consumers?
We recently added a form of mutation-based repair on the repair path for MV: 
it passes the mutation through to apply to view replicas, without applying to 
the local CommitLog, as the SSTables are swapped in to the Tracker. With minimal 
modification we can use a similar path for repair on CDC-enabled tables.

bq. What happens in the corner case where a replica DID receive a mutation 
but did not ack the coordinator in a timely manner, so it will receive a hint 
later? Is the notification pushed twice?
Yes. Currently the guarantee is at-least-once delivery of CDC messages. While 
we could theoretically keep a set of all seen CDC messages and ignore 
duplicates, I'd prefer we start out conservative on the resource footprint and 
evolve the feature in the future as needs arise (unless you have a strong case 
for this now).
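
For consumers that need effectively-once processing on top of at-least-once 
delivery, consumer-side dedup is the usual workaround. A minimal sketch, 
assuming each CDC record exposes some stable identifier (the {{CdcRecord}} / 
{{recordId()}} names below are hypothetical, not part of this proposal):

{code:title=hypothetical consumer-side dedup sketch}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Hypothetical consumer-side dedup: remembers record ids it has already
// processed and skips duplicates caused by hints / re-delivery.
public class DedupingCdcConsumer
{
    private final Set<String> seen = ConcurrentHashMap.newKeySet();
    private final Consumer<CdcRecord> downstream;

    public DedupingCdcConsumer(Consumer<CdcRecord> downstream)
    {
        this.downstream = downstream;
    }

    public void onRecord(CdcRecord record)
    {
        // add() returns false if the id was already seen, i.e. a duplicate
        if (seen.add(record.recordId()))
            downstream.accept(record);
    }

    // Placeholder for whatever shape a CDC record actually ends up having.
    public interface CdcRecord
    {
        String recordId();
    }
}
{code}

In practice the seen-set would have to be bounded (e.g. by time window), which 
is exactly the resource-footprint concern above.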

bq. What happens when a new node is joining and accepting writes for a token 
range while the old node is still accepting writes for that same portion of the 
token range? Will the notification be pushed to any consumer attached to the 
"joining" node?
Any node that performs a write to the CL will also have a CDC record for that 
write (since logically the CL record will be the CDC record).

bq. 4) What happens with write survey mode? Do we push notifications in this 
case?
Great question. I don't feel strongly either way; we could either allow 
write-survey nodes to participate in CDC, and thus open them to throwing UE if 
CDC logs aren't consumed, or we could specifically exclude CDC participation on 
nodes in this mode. Have any strong feelings on it?

> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Tupshin Harper
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 3.x
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]), propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism for implementing good and reliable 
> (deliver-at-least-once with possible mechanisms for deliver-exactly-once ) 
> continuous semi-realtime feeds of mutations going into a Cassandra cluster.
> # To eliminate the developmental and operational burden of users so that they 
> don't have to do dual writes to other systems.
> # For users that are currently doing batch export from a Cassandra system, 
> give them the opportunity to make that realtime with a minimum of coding.
> h2. The mechanism:
> We propose a durable logging mechanism that functions similar to a commitlog, 
> with the following nuances:
> - Takes place on every node, not just the coordinator, so RF number of copies 
> are logged.
> - Separate log per table.
> - Per-table configuration. Only tables that are specified as CDC_LOG would do 
> any logging.
> - Per DC. We are trying to keep the complexity to a minimum to make this an 
> easy enhancement, but most likely use cases would prefer to only implement 
> CDC logging in one (or a subset) of the DCs that are being replicated to
> - In the critical path of ConsistencyLevel acknowledgment. Just as with the 
> commitlog, failure to write to the CDC log should fail that node's write. If 
> that means the requested consistency level was not met, then clients *should* 
> experience UnavailableExceptions.
> - Be written in a Row-centric manner such that it is easy for consumers to 
> reconstitute rows atomically.
> - Written in a simple format designed to be consumed *directly* by daemons 
> written in non JVM languages
> h2. Nice-to-haves
> I strongly suspect that the 

[jira] [Commented] (CASSANDRA-10141) UFPureScriptTest fails with pre-3.0 java-driver

2015-09-24 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14907557#comment-14907557
 ] 

Robert Stupp commented on CASSANDRA-10141:
--

Oh - yeah - that's definitely not what the method's name suggests. Corrected 
that.

> UFPureScriptTest fails with pre-3.0 java-driver
> ---
>
> Key: CASSANDRA-10141
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10141
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joshua McKenzie
>Assignee: Robert Stupp
>  Labels: UDF
> Fix For: 3.0.0 rc2
>
>
> {noformat}
> [junit] -  ---
> [junit] Testcase: 
> testJavascriptTupleTypeCollection(org.apache.cassandra.cql3.validation.entities.UFPureScriptTest):
> Caused an ERROR
> [junit] execution of 'cql_test_keyspace_alt.function_3[tuple frozen, frozen, frozen>>]' failed: 
> java.security.AccessControlException: access denied 
> ("java.lang.RuntimePermission" "accessDeclaredMembers")
> [junit] org.apache.cassandra.exceptions.FunctionExecutionException: 
> execution of 'cql_test_keyspace_alt.function_3[tuple frozen, frozen, frozen>>]' failed: 
> java.security.AccessControlException: access denied 
> ("java.lang.RuntimePermission" "accessDeclaredMembers")
> [junit] at 
> org.apache.cassandra.exceptions.FunctionExecutionException.create(FunctionExecutionException.java:35)
> [junit] at 
> org.apache.cassandra.cql3.functions.UDFunction.execute(UDFunction.java:287)
> [junit] at 
> org.apache.cassandra.cql3.selection.ScalarFunctionSelector.getOutput(ScalarFunctionSelector.java:60)
> [junit] at 
> org.apache.cassandra.cql3.selection.Selection$SelectionWithProcessing$1.getOutputRow(Selection.java:535)
> [junit] at 
> org.apache.cassandra.cql3.selection.Selection$ResultSetBuilder.getOutputRow(Selection.java:363)
> [junit] at 
> org.apache.cassandra.cql3.selection.Selection$ResultSetBuilder.build(Selection.java:351)
> [junit] at 
> org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:599)
> [junit] at 
> org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:363)
> [junit] at 
> org.apache.cassandra.cql3.statements.SelectStatement.executeInternal(SelectStatement.java:379)
> [junit] at 
> org.apache.cassandra.cql3.statements.SelectStatement.executeInternal(SelectStatement.java:72)
> [junit] at 
> org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:337)
> [junit] at 
> org.apache.cassandra.cql3.CQLTester.execute(CQLTester.java:654)
> [junit] at 
> org.apache.cassandra.cql3.validation.entities.UFPureScriptTest.testJavascriptTupleTypeCollection(UFPureScriptTest.java:178)
> [junit] Caused by: java.security.AccessControlException: access denied 
> ("java.lang.RuntimePermission" "accessDeclaredMembers")
> [junit] at 
> java.security.AccessControlContext.checkPermission(AccessControlContext.java:457)
> [junit] at 
> java.security.AccessController.checkPermission(AccessController.java:884)
> [junit] at 
> java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
> [junit] at 
> org.apache.cassandra.cql3.functions.ThreadAwareSecurityManager.checkPermission(ThreadAwareSecurityManager.java:164)
> [junit] at java.lang.Class.checkMemberAccess(Class.java:2348)
> [junit] at java.lang.Class.getEnclosingMethod(Class.java:1037)
> [junit] at java.lang.Class.getGenericSuperclass(Class.java:777)
> [junit] at 
> com.google.common.reflect.TypeCapture.capture(TypeCapture.java:33)
> [junit] at 
> com.google.common.reflect.TypeToken.(TypeToken.java:113)
> [junit] at 
> com.datastax.driver.core.CodecUtils$4.(CodecUtils.java:44)
> [junit] at 
> com.datastax.driver.core.CodecUtils.listOf(CodecUtils.java:44)
> [junit] at 
> com.datastax.driver.core.AbstractGettableByIndexData.getList(AbstractGettableByIndexData.java:347)
> [junit] at 
> com.datastax.driver.core.TupleValue.getList(TupleValue.java:21)
> [junit] at 
> com.datastax.driver.core.AbstractGettableByIndexData.getList(AbstractGettableByIndexData.java:336)
> [junit] at 
> com.datastax.driver.core.TupleValue.getList(TupleValue.java:21)
> [junit] at 
> jdk.nashorn.internal.scripts.Script$2$\^eval\_.:program(:1)
> [junit] at 
> jdk.nashorn.internal.runtime.ScriptFunctionData.invoke(ScriptFunctionData.java:636)
> [junit] at 
> jdk.nashorn.internal.runtime.ScriptFunction.invoke(ScriptFunction.java:229)
> [junit] at 
> 

[jira] [Commented] (CASSANDRA-10141) UFPureScriptTest fails with pre-3.0 java-driver

2015-09-24 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906639#comment-14906639
 ] 

Joshua McKenzie commented on CASSANDRA-10141:
-

This looks off to me:
{code:title=mismatch}
public boolean isPackageAllowed(String pkg)
{
return allowedPackages != null && !allowedPackages.contains(pkg);
}
{code}
If allowedPackages contains the pkg it's not allowed?
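
Presumably the intent is the inverse, i.e. treating {{allowedPackages}} as a 
whitelist. A hedged sketch of what the corrected check might look like (a guess 
at the intent, not necessarily the committed fix):

{code:title=presumed intent (sketch)}
public boolean isPackageAllowed(String pkg)
{
    // null whitelist means "everything allowed"; otherwise require membership
    return allowedPackages == null || allowedPackages.contains(pkg);
}
{code}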

> UFPureScriptTest fails with pre-3.0 java-driver
> ---
>
> Key: CASSANDRA-10141
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10141
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joshua McKenzie
>Assignee: Robert Stupp
>  Labels: UDF
> Fix For: 3.0.0 rc2
>
>
> {noformat}
> [junit] -  ---
> [junit] Testcase: 
> testJavascriptTupleTypeCollection(org.apache.cassandra.cql3.validation.entities.UFPureScriptTest):
> Caused an ERROR
> [junit] execution of 'cql_test_keyspace_alt.function_3[tuple frozen, frozen, frozen>>]' failed: 
> java.security.AccessControlException: access denied 
> ("java.lang.RuntimePermission" "accessDeclaredMembers")
> [junit] org.apache.cassandra.exceptions.FunctionExecutionException: 
> execution of 'cql_test_keyspace_alt.function_3[tuple frozen, frozen, frozen>>]' failed: 
> java.security.AccessControlException: access denied 
> ("java.lang.RuntimePermission" "accessDeclaredMembers")
> [junit] at 
> org.apache.cassandra.exceptions.FunctionExecutionException.create(FunctionExecutionException.java:35)
> [junit] at 
> org.apache.cassandra.cql3.functions.UDFunction.execute(UDFunction.java:287)
> [junit] at 
> org.apache.cassandra.cql3.selection.ScalarFunctionSelector.getOutput(ScalarFunctionSelector.java:60)
> [junit] at 
> org.apache.cassandra.cql3.selection.Selection$SelectionWithProcessing$1.getOutputRow(Selection.java:535)
> [junit] at 
> org.apache.cassandra.cql3.selection.Selection$ResultSetBuilder.getOutputRow(Selection.java:363)
> [junit] at 
> org.apache.cassandra.cql3.selection.Selection$ResultSetBuilder.build(Selection.java:351)
> [junit] at 
> org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:599)
> [junit] at 
> org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:363)
> [junit] at 
> org.apache.cassandra.cql3.statements.SelectStatement.executeInternal(SelectStatement.java:379)
> [junit] at 
> org.apache.cassandra.cql3.statements.SelectStatement.executeInternal(SelectStatement.java:72)
> [junit] at 
> org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:337)
> [junit] at 
> org.apache.cassandra.cql3.CQLTester.execute(CQLTester.java:654)
> [junit] at 
> org.apache.cassandra.cql3.validation.entities.UFPureScriptTest.testJavascriptTupleTypeCollection(UFPureScriptTest.java:178)
> [junit] Caused by: java.security.AccessControlException: access denied 
> ("java.lang.RuntimePermission" "accessDeclaredMembers")
> [junit] at 
> java.security.AccessControlContext.checkPermission(AccessControlContext.java:457)
> [junit] at 
> java.security.AccessController.checkPermission(AccessController.java:884)
> [junit] at 
> java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
> [junit] at 
> org.apache.cassandra.cql3.functions.ThreadAwareSecurityManager.checkPermission(ThreadAwareSecurityManager.java:164)
> [junit] at java.lang.Class.checkMemberAccess(Class.java:2348)
> [junit] at java.lang.Class.getEnclosingMethod(Class.java:1037)
> [junit] at java.lang.Class.getGenericSuperclass(Class.java:777)
> [junit] at 
> com.google.common.reflect.TypeCapture.capture(TypeCapture.java:33)
> [junit] at 
> com.google.common.reflect.TypeToken.(TypeToken.java:113)
> [junit] at 
> com.datastax.driver.core.CodecUtils$4.(CodecUtils.java:44)
> [junit] at 
> com.datastax.driver.core.CodecUtils.listOf(CodecUtils.java:44)
> [junit] at 
> com.datastax.driver.core.AbstractGettableByIndexData.getList(AbstractGettableByIndexData.java:347)
> [junit] at 
> com.datastax.driver.core.TupleValue.getList(TupleValue.java:21)
> [junit] at 
> com.datastax.driver.core.AbstractGettableByIndexData.getList(AbstractGettableByIndexData.java:336)
> [junit] at 
> com.datastax.driver.core.TupleValue.getList(TupleValue.java:21)
> [junit] at 
> jdk.nashorn.internal.scripts.Script$2$\^eval\_.:program(:1)
> [junit] at 
> jdk.nashorn.internal.runtime.ScriptFunctionData.invoke(ScriptFunctionData.java:636)
> [junit] at 
> 

[jira] [Commented] (CASSANDRA-10394) Mixed case usernames do not work

2015-09-24 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906653#comment-14906653
 ] 

Sam Tunnicliffe commented on CASSANDRA-10394:
-

The fix is simple: I'd just incorrectly set the {{keepCase}} flag, so I've 
pushed patches reversing that. Incidentally, the new (in 2.2) {{CREATE ROLE}} 
syntax was behaving in the same way. Of course, that isn't technically a 
regression, but I do think it's probably not what most users will be expecting, 
so I've also changed that to preserve the case. That's perhaps a bit 
controversial for a minor release, so I'd be ok with omitting that part.
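
For context, a minimal sketch of the effect that flag has on identifier 
handling (a hypothetical helper, not the actual parser code): unquoted names 
are folded to lower case unless {{keepCase}} is set, which is why 
{{CREATE USER stBarts}} ends up stored as {{stbarts}}.

{code:title=hypothetical keepCase illustration}
import java.util.Locale;

public final class IdentifierCase
{
    // Hypothetical illustration of keepCase semantics; not Cassandra's parser code.
    static String canonicalize(String raw, boolean keepCase)
    {
        boolean quoted = raw.length() >= 2 && raw.startsWith("\"") && raw.endsWith("\"");
        String name = quoted ? raw.substring(1, raw.length() - 1) : raw;
        // Quoted identifiers always keep their case; unquoted ones are
        // lower-cased unless keepCase is set.
        return (quoted || keepCase) ? name : name.toLowerCase(Locale.US);
    }

    public static void main(String[] args)
    {
        System.out.println(canonicalize("stBarts", false));      // stbarts
        System.out.println(canonicalize("stBarts", true));       // stBarts
        System.out.println(canonicalize("\"stBarts\"", false));  // stBarts
    }
}
{code}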

Patches:
|[2.2|https://github.com/beobal/cassandra/tree/10394-2.2]|[3.0|https://github.com/beobal/cassandra/tree/10394-3.0]|[trunk|https://github.com/beobal/cassandra/tree/10394-trunk]|

CI Tests:
|2.2|[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-10394-2.2-testall/]
 | 
[dtests|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-10394-2.2-dtest/]|
|3.0|[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-10394-3.0-testall/]
 | 
[dtests|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-10394-3.0-dtest/]|
|trunk|[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-10394-trunk-testall/]
 | 
[dtests|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-10394-trunk-dtest/]|

> Mixed case usernames do not work
> 
>
> Key: CASSANDRA-10394
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10394
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Centos 7, Cassandra 2.2.1
>Reporter: William Streaker
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 2.2.x
>
>
> When you create a user with a mixed case username it is stored as all lower 
> case. When you try to log in with the mixed case username it fails, but 
> logging in with the lower case name works. This is a change from the released 
> 2.1.x versions, where mixed case usernames worked.
> example:
> CREATE USER stBarts WITH PASSWORD 'island';
> The above statement changes the username to "stbarts".
> This would not be so bad, except that during login the case does matter and 
> has to match what is stored in the system.
> Recommended fix: allow mixed case usernames to be stored in the system, or 
> convert the mixed case username entered to lower case during login.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10394) Mixed case usernames do not work

2015-09-24 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-10394:

Reproduced In: 2.2.1, 2.2.0  (was: 2.2.0, 2.2.1)
 Reviewer: Tyler Hobbs

> Mixed case usernames do not work
> 
>
> Key: CASSANDRA-10394
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10394
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Centos 7, Cassandra 2.2.1
>Reporter: William Streaker
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 2.2.x
>
>
> When you create a user with a mixed case username it is stored as all lower 
> case. When you try to log in with the mixed case username it fails, but 
> logging in with the lower case name works. This is a change from the released 
> 2.1.x versions, where mixed case usernames worked.
> example:
> CREATE USER stBarts WITH PASSWORD 'island';
> The above statement changes the username to "stbarts".
> This would not be so bad, except that during login the case does matter and 
> has to match what is stored in the system.
> Recommended fix: allow mixed case usernames to be stored in the system, or 
> convert the mixed case username entered to lower case during login.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/2] cassandra git commit: Simplify MultiCBuilder implementation

2015-09-24 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 2e87c433a -> 32d761684


Simplify MultiCBuilder implementation

We had 2 implementations of MultiCBuilder but one is now unused. The
patch thus simplifies the implementation by getting rid of the unused
wrap method and making the whole class non-abstract.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/53dc42d4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/53dc42d4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/53dc42d4

Branch: refs/heads/cassandra-3.0
Commit: 53dc42d4243766f0a828dc167175c7036c1b3942
Parents: 2e87c43
Author: Sylvain Lebresne 
Authored: Thu Sep 24 09:41:57 2015 -0700
Committer: Sylvain Lebresne 
Committed: Thu Sep 24 10:08:13 2015 -0700

--
 .../org/apache/cassandra/db/MultiCBuilder.java  | 507 ---
 1 file changed, 199 insertions(+), 308 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/53dc42d4/src/java/org/apache/cassandra/db/MultiCBuilder.java
--
diff --git a/src/java/org/apache/cassandra/db/MultiCBuilder.java 
b/src/java/org/apache/cassandra/db/MultiCBuilder.java
index ab1c94d..be654fa 100644
--- a/src/java/org/apache/cassandra/db/MultiCBuilder.java
+++ b/src/java/org/apache/cassandra/db/MultiCBuilder.java
@@ -26,101 +26,64 @@ import org.apache.cassandra.utils.btree.BTreeSet;
 /**
  * Builder that allow to build multiple Clustering/Slice.Bound at the same 
time.
  */
-public abstract class MultiCBuilder
+public class MultiCBuilder
 {
 /**
- * Creates a new empty {@code MultiCBuilder}.
+ * The table comparator.
  */
-public static MultiCBuilder create(ClusteringComparator comparator)
-{
-return new ConcreteMultiCBuilder(comparator);
-}
+private final ClusteringComparator comparator;
 
 /**
- * Wraps an existing {@code CBuilder} to provide him with a MultiCBuilder 
interface
- * for the sake of passing it to {@link Restriction.appendTo}. The 
resulting
- * {@code MultiCBuilder} will still only be able to create a single 
clustering/bound
- * and an {@code IllegalArgumentException} will be thrown if elements that 
added that
- * would correspond to building multiple clusterings.
+ * The elements of the clusterings
  */
-public static MultiCBuilder wrap(final CBuilder builder)
-{
-return new MultiCBuilder()
-{
-private boolean containsNull;
-private boolean containsUnset;
-private boolean hasMissingElements;
-
-public MultiCBuilder addElementToAll(ByteBuffer value)
-{
-builder.add(value);
-
-if (value == null)
-containsNull = true;
-if (value == ByteBufferUtil.UNSET_BYTE_BUFFER)
-containsUnset = true;
-
-return this;
-}
-
-public MultiCBuilder addEachElementToAll(List values)
-{
-if (values.isEmpty())
-{
-hasMissingElements = true;
-return this;
-}
+private final List elementsList = new ArrayList<>();
 
-if (values.size() > 1)
-throw new IllegalArgumentException();
-
-return addElementToAll(values.get(0));
-}
-
-public MultiCBuilder addAllElementsToAll(List 
values)
-{
-if (values.isEmpty())
-{
-hasMissingElements = true;
-return this;
-}
-
-if (values.size() > 1)
-throw new IllegalArgumentException();
+/**
+ * The number of elements that have been added.
+ */
+private int size;
 
-return addEachElementToAll(values.get(0));
-}
+/**
+ * true if the clusterings have been build, 
false otherwise.
+ */
+private boolean built;
 
-public int remainingCount()
-{
-return builder.remainingCount();
-}
+/**
+ * true if the clusterings contains some null 
elements.
+ */
+private boolean containsNull;
 
-public boolean containsNull()
-{
-return containsNull;
-}
+/**
+ * true if the composites contains some unset 
elements.
+ */
+private boolean containsUnset;
 
-public boolean containsUnset()
-{
-return containsUnset;
-}
+/**
+ * true if some empty collection have been added.
+ */
+private 

[2/2] cassandra git commit: Properly test for empty static row

2015-09-24 Thread slebresne
Properly test for empty static row

UnfilteredRowIterators.noRowsIterator() was testing the staticRow
against null to avoid allocating an empty Columns object, but we never
use null for empty static rows, we use Rows.EMPTY_STATIC_ROW so the
branch was never taken. This is a trivial fix to avoid that allocation.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/32d76168
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/32d76168
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/32d76168

Branch: refs/heads/cassandra-3.0
Commit: 32d7616846cb02b89fdecd087e6fbdf53e50fcb2
Parents: 53dc42d
Author: Sylvain Lebresne 
Authored: Thu Sep 24 10:02:53 2015 -0700
Committer: Sylvain Lebresne 
Committed: Thu Sep 24 10:08:17 2015 -0700

--
 .../org/apache/cassandra/db/rows/UnfilteredRowIterators.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/32d76168/src/java/org/apache/cassandra/db/rows/UnfilteredRowIterators.java
--
diff --git a/src/java/org/apache/cassandra/db/rows/UnfilteredRowIterators.java 
b/src/java/org/apache/cassandra/db/rows/UnfilteredRowIterators.java
index e251670..22628e2 100644
--- a/src/java/org/apache/cassandra/db/rows/UnfilteredRowIterators.java
+++ b/src/java/org/apache/cassandra/db/rows/UnfilteredRowIterators.java
@@ -99,8 +99,8 @@ public abstract class UnfilteredRowIterators
  */
 public static UnfilteredRowIterator noRowsIterator(final CFMetaData cfm, 
final DecoratedKey partitionKey, final Row staticRow, final DeletionTime 
partitionDeletion, final boolean isReverseOrder)
 {
-PartitionColumns columns = staticRow == null ? PartitionColumns.NONE
- : new 
PartitionColumns(Columns.from(staticRow.columns()), Columns.NONE);
+PartitionColumns columns = staticRow == Rows.EMPTY_STATIC_ROW ? 
PartitionColumns.NONE
+  : new 
PartitionColumns(Columns.from(staticRow.columns()), Columns.NONE);
 return new UnfilteredRowIterator()
 {
 public CFMetaData metadata()



[jira] [Commented] (CASSANDRA-8844) Change Data Capture (CDC)

2015-09-24 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906662#comment-14906662
 ] 

Joshua McKenzie commented on CASSANDRA-8844:


A question that's come up - if we have a small % (1% for instance) of CL 
records as CDC-enabled, we end up with a large amount of wasted space in the 
CDC log and hit our size limit much earlier due to fragmentation.

A simple "compaction" of the CL into the CDC log would better facilitate 
long-term storage of CDC records. On CL discard/recycle, we can pass the CL 
segment over to a CDC-cleaning task that iterates through the file, discarding 
all non-CDC records and all records behind the timestamp of the currently 
consumed CDC offset, and writes the remaining records to a new file for future 
CDC consumption (see the sketch below).

This should allow CDC to better support both the more immediate consumption 
model (CDC consumed within < 100 ms, for instance) and the long-term consumption 
model (files sit on the cluster for months to be collected in bulk). This would 
also open the door for us to add CDC compression in the future.
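
To make the cleaning step concrete, a minimal sketch of such a CL -> CDC 
filtering pass, assuming hypothetical record-level accessors 
({{isCdcEnabled()}}, {{timestamp()}}) and segment reader/writer types that do 
not exist in this form today:

{code:title=hypothetical CL -> CDC cleaning sketch}
// Hypothetical sketch of the CL -> CDC "compaction" described above: on CL
// segment discard, copy only the CDC-enabled records that the consumer has
// not yet seen into a new, denser file for long-term CDC consumption.
public final class CdcSegmentCleaner
{
    // Minimal stand-in for a commit log record; not Cassandra's real type.
    public interface LogRecord
    {
        boolean isCdcEnabled();
        long timestamp();
    }

    // Minimal stand-ins for segment readers/writers.
    public interface SegmentReader extends Iterable<LogRecord> {}
    public interface SegmentWriter { void append(LogRecord record); }

    /**
     * Copies records worth keeping from a discarded CL segment into a CDC file.
     * consumedUpTo is the timestamp of the latest CDC record already consumed;
     * anything at or before it is dropped.
     */
    public static int clean(SegmentReader discardedSegment, SegmentWriter cdcFile, long consumedUpTo)
    {
        int kept = 0;
        for (LogRecord record : discardedSegment)
        {
            // Drop non-CDC records and records the consumer has already processed.
            if (!record.isCdcEnabled() || record.timestamp() <= consumedUpTo)
                continue;

            cdcFile.append(record);
            kept++;
        }
        return kept;
    }
}
{code}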



> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Tupshin Harper
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 3.x
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]), propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism for implementing good and reliable 
> (deliver-at-least-once with possible mechanisms for deliver-exactly-once ) 
> continuous semi-realtime feeds of mutations going into a Cassandra cluster.
> # To eliminate the developmental and operational burden of users so that they 
> don't have to do dual writes to other systems.
> # For users that are currently doing batch export from a Cassandra system, 
> give them the opportunity to make that realtime with a minimum of coding.
> h2. The mechanism:
> We propose a durable logging mechanism that functions similar to a commitlog, 
> with the following nuances:
> - Takes place on every node, not just the coordinator, so RF number of copies 
> are logged.
> - Separate log per table.
> - Per-table configuration. Only tables that are specified as CDC_LOG would do 
> any logging.
> - Per DC. We are trying to keep the complexity to a minimum to make this an 
> easy enhancement, but most likely use cases would prefer to only implement 
> CDC logging in one (or a subset) of the DCs that are being replicated to
> - In the critical path of ConsistencyLevel acknowledgment. Just as with the 
> commitlog, failure to write to the CDC log should fail that node's write. If 
> that means the requested consistency level was not met, then clients *should* 
> experience UnavailableExceptions.
> - Be written in a Row-centric manner such that it is easy for consumers to 
> reconstitute rows atomically.
> - Written in a simple format designed to be consumed *directly* by daemons 
> written in non JVM languages
> h2. Nice-to-haves
> I strongly suspect that the following features will be asked for, but I also 
> believe that they can be deferred for a subsequent release, and to gauge 
> actual interest.
> - Multiple logs per table. This would make it easy to have multiple 
> "subscribers" to a single table's changes. A workaround would be to create a 
> forking daemon listener, but that's not a great answer.
> - Log filtering. Being able to apply filters, including UDF-based filters 
> would make Cassandra a much more versatile feeder into other systems, and 
> again, reduce complexity that would otherwise need to be built into the 
> daemons.
> h2. Format and Consumption
> - Cassandra would only write to the CDC log, and never delete from it. 
> - Cleaning up consumed logfiles would be the client daemon's responsibility
> - Logfile size should probably be 

[jira] [Commented] (CASSANDRA-10394) Mixed case usernames do not work

2015-09-24 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906670#comment-14906670
 ] 

Sam Tunnicliffe commented on CASSANDRA-10394:
-

also : [dtest PR|https://github.com/riptano/cassandra-dtest/pull/570]

> Mixed case usernames do not work
> 
>
> Key: CASSANDRA-10394
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10394
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Centos 7, Cassandra 2.2.1
>Reporter: William Streaker
>Assignee: Sam Tunnicliffe
>Priority: Critical
> Fix For: 2.2.x
>
>
> When you create a user with a mixed case username it is stored as all lower 
> case. When you try to log in with the mixed case username it fails, but 
> logging in with the lower case name works. This is a change from the released 
> 2.1.x versions, where mixed case usernames worked.
> example:
> CREATE USER stBarts WITH PASSWORD 'island';
> The above statement changes the username to "stbarts".
> This would not be so bad, except that during login the case does matter and 
> has to match what is stored in the system.
> Recommended fix: allow mixed case usernames to be stored in the system, or 
> convert the mixed case username entered to lower case during login.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10395) Monitor UDFs using a single thread

2015-09-24 Thread Robert Stupp (JIRA)
Robert Stupp created CASSANDRA-10395:


 Summary: Monitor UDFs using a single thread
 Key: CASSANDRA-10395
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10395
 Project: Cassandra
  Issue Type: Improvement
Reporter: Robert Stupp
Assignee: Robert Stupp
Priority: Minor
 Fix For: 3.0.x


Currently, each UDF execution is handed over to a separate thread pool so that 
UDF timeouts can be detected. We could instead leave UDF execution on the 
"original" thread and have another thread/scheduled job regularly look for UDF 
timeouts, which would save some time when executing UDFs.
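
A minimal sketch of the single-thread monitoring idea, with hypothetical names 
(this is not the actual patch): each execution registers its start time and 
thread, and one shared scheduled task periodically flags executions that have 
exceeded the timeout.

{code:title=hypothetical single-thread UDF watchdog sketch}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Hypothetical sketch: UDFs run on the calling thread; a single scheduled
// watchdog thread periodically checks for executions exceeding the timeout.
public final class UdfTimeoutWatchdog
{
    private static final long TIMEOUT_MILLIS = 500;

    private final Map<Thread, Long> inFlight = new ConcurrentHashMap<>();
    private final ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();

    public UdfTimeoutWatchdog()
    {
        // One shared monitoring task instead of one pool hand-off per UDF call.
        watchdog.scheduleAtFixedRate(this::checkTimeouts, 100, 100, TimeUnit.MILLISECONDS);
    }

    public <T> T execute(Supplier<T> udf)
    {
        Thread current = Thread.currentThread();
        inFlight.put(current, System.currentTimeMillis());
        try
        {
            return udf.get(); // runs on the "original" thread, no hand-off
        }
        finally
        {
            inFlight.remove(current);
        }
    }

    private void checkTimeouts()
    {
        long now = System.currentTimeMillis();
        for (Map.Entry<Thread, Long> entry : inFlight.entrySet())
        {
            if (now - entry.getValue() > TIMEOUT_MILLIS)
                entry.getKey().interrupt(); // or fail the UDF however the real impl chooses
        }
    }
}
{code}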




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)