[jira] [Updated] (CASSANDRA-13162) Batchlog replay is throttled during bootstrap, creating conditions for incorrect query results on materialized views
[ https://issues.apache.org/jira/browse/CASSANDRA-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Deng updated CASSANDRA-13162: - Description: I've tested this in a C* 3.0 cluster with a couple of Materialized Views defined (one base table and two MVs on that base table). The data volume is not very high per node (about 80GB of data per node total, and that particular base table has about 25GB of data uncompressed with one MV taking 18GB compressed and the other MV taking 3GB), and the cluster is using decent hardware (EC2 C4.8XL with 18 cores + 60GB RAM + 18K IOPS RAID0 from two 3TB gp2 EBS volumes). This was originally a 9-node cluster. It appears that after adding 3 more nodes to the DC, the system.batches table accumulated a lot of data on the 3 new nodes (each having around 20GB under the system.batches directory), and in the subsequent week the batchlog on the 3 new nodes got slowly replayed back to the rest of the nodes in the cluster. The bottleneck seems to be the throttling defined in this cassandra.yaml setting: batchlog_replay_throttle_in_kb, which by default is set to 1MB/s. Given that it is taking almost a week (and still hasn't finished) for the batchlog (from MV) to be replayed after the bootstrap finishes, it seems only reasonable to unthrottle (or at least give it a much higher throttle rate) during the initial bootstrap, and hence I'd consider this a bug for our current MV implementation. Also as far as I understand, the bootstrap logic won't wait for the backlogged batchlog to be fully replayed before changing the new bootstrapping node to "UN" state, and if batchlog for the MVs got stuck in this state for a long time, we basically will get wrong answers on the MVs during that whole duration (until the batchlog is fully replayed to the cluster), which adds even more criticality to this bug. was: I've tested this in a C* 3.0 cluster with a couple of Materialized Views defined (one base table and two MVs on that base table). 
The data volume is not very high per node (about 80GB of data per node total, and that particular base table has about 25GB of data uncompressed with one MV taking 18GB compressed and the other MV taking 3GB), and the cluster is using decent hardware (EC2 C4.8XL with 18 cores + 60GB RAM + 18K IOPS RAID0 from two 3TB gp2 EBS volumes). This was originally a 9-node cluster. It appears that after adding 3 more nodes to the DC, the system.batches table accumulated a lot of data on the 3 new nodes, and in the subsequent week the batchlog on the 3 new nodes got slowly replayed back to the rest of the nodes in the cluster. The bottleneck seems to be the throttling defined in this cassandra.yaml setting: batchlog_replay_throttle_in_kb, which by default is set to 1MB/s. Given that it is taking almost a week (and still hasn't finished) for the batchlog (from MV) to be replayed after the bootstrap finishes, it seems only reasonable to unthrottle (or at least give it a much higher throttle rate) during the initial bootstrap, and hence I'd consider this a bug for our current MV implementation. Also as far as I understand, the bootstrap logic won't wait for the backlogged batchlog to be fully replayed before changing the new bootstrapping node to "UN" state, and if batchlog for the MVs got stuck in this state for a long time, we basically will get wrong answers on the MVs during that whole duration (until the batchlog is fully replayed to the cluster), which adds even more criticality to this bug. > Batchlog replay is throttled during bootstrap, creating conditions for > incorrect query results on materialized views > > > Key: CASSANDRA-13162 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13162 > Project: Cassandra > Issue Type: Bug > Reporter: Wei Deng > Priority: Critical > Labels: bootstrap, materializedviews > > I've tested this in a C* 3.0 cluster with a couple of Materialized Views > defined (one base table and two MVs on that base table). 
The data volume is > not very high per node (about 80GB of data per node total, and that > particular base table has about 25GB of data uncompressed with one MV taking > 18GB compressed and the other MV taking 3GB), and the cluster is using decent > hardware (EC2 C4.8XL with 18 cores + 60GB RAM + 18K IOPS RAID0 from two 3TB > gp2 EBS volumes). > This is originally a 9-node cluster. It appears that after adding 3 more > nodes to the DC, the system.batches table accumulated a lot of data on the 3 > new nodes (each having around 20GB under system.batches directory), and in > the subsequent week the batchlog on the 3 new nodes got slowly replayed back > to the rest of the nodes in the cluster. The bottleneck seems to
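To make the throttling mechanism concrete: a byte-rate throttle makes total replay time scale linearly with the accumulated batchlog, which is why tens of GB at a 1MB/s cap stretches into days. The sketch below is illustrative only; the class and method names are not Cassandra's, and the real throttle sits behind the batchlog_replay_throttle_in_kb setting.

```java
// A minimal byte-rate throttle sketch: the replay loop would acquire byte
// permits before each send, sleeping just enough to hold the average rate.
// Illustrative names only, not the actual Cassandra implementation.
public class BatchlogReplayThrottle {
    private final long bytesPerSecond;
    private final long startNanos = System.nanoTime();
    private long bytesAcquired = 0;

    public BatchlogReplayThrottle(long bytesPerSecond) {
        this.bytesPerSecond = bytesPerSecond;
    }

    // Blocks just long enough that the average rate stays at or below the cap.
    public synchronized void acquire(long bytes) throws InterruptedException {
        bytesAcquired += bytes;
        long expectedNanos = bytesAcquired * 1_000_000_000L / bytesPerSecond;
        long sleepNanos = expectedNanos - (System.nanoTime() - startNanos);
        if (sleepNanos > 0)
            Thread.sleep(sleepNanos / 1_000_000, (int) (sleepNanos % 1_000_000));
    }

    public static void main(String[] args) throws InterruptedException {
        BatchlogReplayThrottle throttle = new BatchlogReplayThrottle(10_000_000L); // 10 MB/s
        throttle.acquire(1_000_000); // sleeps roughly 100 ms to hold the average
        System.out.println("1MB replayed under throttle");
    }
}
```

Raising bytesPerSecond (the equivalent of raising batchlog_replay_throttle_in_kb during bootstrap) shortens the window in which MV reads can see stale data.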
cassandra git commit: fix table id comparison
Repository: cassandra Updated Branches: refs/heads/trunk 8c504a0b8 -> 03662bb36 fix table id comparison Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/03662bb3 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/03662bb3 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/03662bb3 Branch: refs/heads/trunk Commit: 03662bb360a3cc3ab052083285de7c12adeee2bb Parents: 8c504a0 Author: Dave Brosius Authored: Sat Jan 28 00:38:37 2017 -0500 Committer: Dave Brosius Committed: Sat Jan 28 00:38:37 2017 -0500 -- src/java/org/apache/cassandra/schema/Views.java | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/03662bb3/src/java/org/apache/cassandra/schema/Views.java -- diff --git a/src/java/org/apache/cassandra/schema/Views.java b/src/java/org/apache/cassandra/schema/Views.java index 6578b14..5765433 100644 --- a/src/java/org/apache/cassandra/schema/Views.java +++ b/src/java/org/apache/cassandra/schema/Views.java @@ -73,7 +73,7 @@ public final class Views implements Iterable public Iterable forTable(UUID tableId) { -return Iterables.filter(this, v -> v.baseTableId.equals(tableId)); +return Iterables.filter(this, v -> v.baseTableId.asUUID().equals(tableId)); } /**
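The one-line fix above matters because equals() across unrelated types always returns false: comparing a table-id wrapper object against a raw UUID can never match, so the pre-fix filter silently returned nothing. The TableId below is a simplified stand-in for illustration, not the real class from org.apache.cassandra.schema.

```java
import java.util.UUID;

// Simplified stand-in showing why the pre-fix comparison could never match.
public class TableIdComparisonDemo {
    static final class TableId {
        private final UUID id;
        TableId(UUID id) { this.id = id; }
        UUID asUUID() { return id; }

        @Override
        public boolean equals(Object o) {
            // Type-checked: a TableId is never equal to a plain UUID
            return o instanceof TableId && id.equals(((TableId) o).id);
        }

        @Override
        public int hashCode() { return id.hashCode(); }
    }

    public static void main(String[] args) {
        UUID raw = UUID.randomUUID();
        TableId wrapped = new TableId(raw);
        System.out.println(wrapped.equals(raw));          // false: the pre-fix comparison
        System.out.println(wrapped.asUUID().equals(raw)); // true: the patched comparison
    }
}
```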
cassandra git commit: Fix broken SASI diagrams from PR #90
Repository: cassandra Updated Branches: refs/heads/trunk af3fe39dc -> 8c504a0b8 Fix broken SASI diagrams from PR #90 This closes #91 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8c504a0b Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8c504a0b Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8c504a0b Branch: refs/heads/trunk Commit: 8c504a0b8c1a61bb72673ed42ce4ae136a4c4519 Parents: af3fe39 Author: Joaquin Casares Authored: Fri Jan 27 18:24:12 2017 -0600 Committer: Michael Shuler Committed: Fri Jan 27 18:50:22 2017 -0600 -- doc/SASI.md | 48 1 file changed, 24 insertions(+), 24 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c504a0b/doc/SASI.md -- diff --git a/doc/SASI.md b/doc/SASI.md index c45eb04..f5e78ca 100644 --- a/doc/SASI.md +++ b/doc/SASI.md
[diff hunks at doc/SASI.md lines 563, 586, and 608 omitted: whitespace-alignment fixes to the box-drawing query-plan tree diagrams (AND nodes over `fname=p*`, `fname!=pa*`, `age > 21`, `age < 100`); the Unicode box characters did not survive extraction]
[jira] [Created] (CASSANDRA-13164) eclipse-warnings on CASSANDRA-9425
Michael Shuler created CASSANDRA-13164: -- Summary: eclipse-warnings on CASSANDRA-9425 Key: CASSANDRA-13164 URL: https://issues.apache.org/jira/browse/CASSANDRA-13164 Project: Cassandra Issue Type: Sub-task Reporter: Michael Shuler CASSANDRA-9425 commit triggered a few eclipse-warnings on http://cassci.datastax.com/job/trunk_eclipse-warnings/1181/ cc: [~iamaleksey] {noformat} 23:50:23 eclipse-warnings: 23:50:23 [mkdir] Created dir: /var/lib/jenkins/jobs/trunk_eclipse-warnings/workspace/build/ecj 23:50:23 [echo] Running Eclipse Code Analysis. Output logged to /var/lib/jenkins/jobs/trunk_eclipse-warnings/workspace/build/ecj/eclipse_compiler_checks.txt 23:50:34 [java] -- 23:50:34 [java] 1. ERROR in /var/lib/jenkins/jobs/trunk_eclipse-warnings/workspace/src/java/org/apache/cassandra/cache/OHCProvider.java (at line 138) 23:50:34 [java]throw new RuntimeException(e); 23:50:34 [java]^^ 23:50:34 [java] Potential resource leak: 'dataOutput' may not be closed at this location 23:50:34 [java] -- 23:50:34 [java] 2. ERROR in /var/lib/jenkins/jobs/trunk_eclipse-warnings/workspace/src/java/org/apache/cassandra/cache/OHCProvider.java (at line 159) 23:50:34 [java]throw new RuntimeException(e); 23:50:34 [java]^^ 23:50:34 [java] Potential resource leak: 'dataInput' may not be closed at this location 23:50:34 [java] -- 23:50:34 [java] 3. ERROR in /var/lib/jenkins/jobs/trunk_eclipse-warnings/workspace/src/java/org/apache/cassandra/cache/OHCProvider.java (at line 163) 23:50:34 [java]return new RowCacheKey(tableId, indexName, key); 23:50:34 [java] 23:50:34 [java] Potential resource leak: 'dataInput' may not be closed at this location 23:50:34 [java] -- 23:50:34 [java] 3 problems (3 errors) 23:50:34 23:50:34 BUILD FAILED {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
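The three ECJ errors above are all the same shape: a stream is opened, a checked exception from a write/read is rethrown as a RuntimeException, and close() is never reached on that path. The usual remedy is try-with-resources. The snippet below is a generic sketch of the pattern and its fix, not the actual OHCProvider code.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Generic sketch of the leak pattern ECJ flags, and the try-with-resources
// fix: the stream is closed on every path, including the rethrow.
public class ResourceLeakDemo {
    static byte[] serialize(int value) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream dataOutput = new DataOutputStream(bytes)) {
            dataOutput.writeInt(value); // if this throws, dataOutput still gets closed
        } catch (IOException e) {
            throw new RuntimeException(e); // no "may not be closed" warning here
        }
        return bytes.toByteArray();
    }

    public static void main(String[] args) {
        System.out.println(serialize(42).length); // a Java int serializes to 4 bytes
    }
}
```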
[jira] [Commented] (CASSANDRA-13156) Introduce an interface to tracing for determining whether a query should be traced
[ https://issues.apache.org/jira/browse/CASSANDRA-13156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843613#comment-15843613 ] Jeremiah Jordan commented on CASSANDRA-13156: - I think this could already be done with a custom trace implementation? > Introduce an interface to tracing for determining whether a query should be > traced > -- > > Key: CASSANDRA-13156 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13156 > Project: Cassandra > Issue Type: New Feature > Components: Observability >Reporter: Sam Overton >Assignee: Sam Overton > Labels: ops > > This is a similar idea to CASSANDRA-9193 but following the same pattern that > we have for IAuthenticator, IEndpointSnitch, ConfigurationLoader et al. where > the intention is that useful default implementations are provided, but > abstracted in such a way that custom implementations can be written for > deployments where a specific type of functionality is required. This would > then allow solutions such as CASSANDRA-11012 without any specific support > needing to be written in Cassandra. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-13154) Add trace_probability as a CF level property
[ https://issues.apache.org/jira/browse/CASSANDRA-13154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nate McCall updated CASSANDRA-13154: Labels: ops (was: ) > Add trace_probability as a CF level property > > > Key: CASSANDRA-13154 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13154 > Project: Cassandra > Issue Type: New Feature > Components: Observability >Reporter: Sam Overton >Assignee: Sam Overton > Labels: ops > > Include trace_probability as a CF level property, so sampled tracing can be > enabled on a per-CF basis, cluster-wide, by changing the CF property. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-13155) Allow tracing at the CFS level
[ https://issues.apache.org/jira/browse/CASSANDRA-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nate McCall updated CASSANDRA-13155: Labels: ops (was: ) > Allow tracing at the CFS level > -- > > Key: CASSANDRA-13155 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13155 > Project: Cassandra > Issue Type: Improvement > Components: Observability >Reporter: Sam Overton >Assignee: Sam Overton > Labels: ops > > If we have a misbehaving host, then it would be useful to enable sampled > tracing at the CFS layer on just that host so that we can investigate queries > landing on that replica, rather than just queries passing through as a > coordinator as is currently possible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-13156) Introduce an interface to tracing for determining whether a query should be traced
[ https://issues.apache.org/jira/browse/CASSANDRA-13156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nate McCall updated CASSANDRA-13156: Labels: ops (was: ) > Introduce an interface to tracing for determining whether a query should be > traced > -- > > Key: CASSANDRA-13156 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13156 > Project: Cassandra > Issue Type: New Feature > Components: Observability >Reporter: Sam Overton >Assignee: Sam Overton > Labels: ops > > This is a similar idea to CASSANDRA-9193 but following the same pattern that > we have for IAuthenticator, IEndpointSnitch, ConfigurationLoader et al. where > the intention is that useful default implementations are provided, but > abstracted in such a way that custom implementations can be written for > deployments where a specific type of functionality is required. This would > then allow solutions such as CASSANDRA-11012 without any specific support > needing to be written in Cassandra. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
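The pattern CASSANDRA-13156 proposes, following IAuthenticator and IEndpointSnitch, might look like the sketch below. The interface name, method signature, and factory here are hypothetical illustrations, not anything from the Cassandra codebase.

```java
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical pluggable hook for deciding whether a query gets traced,
// mirroring the IAuthenticator/IEndpointSnitch pattern the ticket references.
public interface TraceFilter {
    // Decide per query whether a trace session should be started.
    boolean shouldTrace(String keyspace, String table);

    // A useful default implementation: probabilistic sampling, in the spirit
    // of today's nodetool settraceprobability.
    static TraceFilter probabilistic(double probability) {
        return (keyspace, table) -> ThreadLocalRandom.current().nextDouble() < probability;
    }
}
```

A custom implementation could then consult per-table properties (CASSANDRA-13154) or per-host state (CASSANDRA-13155) without Cassandra needing specific support for either.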
[jira] [Updated] (CASSANDRA-13090) Coalescing strategy sleeps too much
[ https://issues.apache.org/jira/browse/CASSANDRA-13090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-13090: --- Summary: Coalescing strategy sleeps too much (was: Coalescing strategy sleep too much and should be enabled by default) > Coalescing strategy sleeps too much > --- > > Key: CASSANDRA-13090 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13090 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging >Reporter: Corentin Chary > Fix For: 3.x > > Attachments: 0001-Fix-wait-time-coalescing-CASSANDRA-13090-2.patch, > 0001-Fix-wait-time-coalescing-CASSANDRA-13090.patch > > > With the current code maybeSleep is called even if we managed to take > maxItems out of the backlog. In this case we should really avoid sleeping > because it means that backlog is building up. > I'll send a patch shortly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-13090) Coalescing strategy sleep too much and should be enabled by default
[ https://issues.apache.org/jira/browse/CASSANDRA-13090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-13090: --- Summary: Coalescing strategy sleep too much and should be enabled by default (was: Coalescing strategy sleep) > Coalescing strategy sleep too much and should be enabled by default > --- > > Key: CASSANDRA-13090 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13090 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging >Reporter: Corentin Chary > Fix For: 3.x > > Attachments: 0001-Fix-wait-time-coalescing-CASSANDRA-13090-2.patch, > 0001-Fix-wait-time-coalescing-CASSANDRA-13090.patch > > > With the current code maybeSleep is called even if we managed to take > maxItems out of the backlog. In this case we should really avoid sleeping > because it means that backlog is building up. > I'll send a patch shortly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-13090) Coalescing strategy sleep
[ https://issues.apache.org/jira/browse/CASSANDRA-13090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843602#comment-15843602 ] Ariel Weisberg commented on CASSANDRA-13090: Tests pass but I had to rebase because the merge order changed. Should be ready to commit soon. > Coalescing strategy sleep > - > > Key: CASSANDRA-13090 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13090 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging >Reporter: Corentin Chary > Fix For: 3.x > > Attachments: 0001-Fix-wait-time-coalescing-CASSANDRA-13090-2.patch, > 0001-Fix-wait-time-coalescing-CASSANDRA-13090.patch > > > With the current code maybeSleep is called even if we managed to take > maxItems out of the backlog. In this case we should really avoid sleeping > because it means that backlog is building up. > I'll send a patch shortly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
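The logic behind the CASSANDRA-13090 fix: if a drain pulls a full batch of maxItems out of the backlog, the producer is keeping up with or outpacing the consumer, so sleeping to coalesce more messages only lets the backlog grow. A sketch with illustrative names, not the actual coalescing-strategy code:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the behaviour the patch targets: only consider sleeping when the
// drain came back with a partial batch.
public class CoalescingSketch {
    static int drain(Queue<Integer> backlog, int maxItems) {
        int drained = 0;
        while (drained < maxItems && backlog.poll() != null)
            drained++;
        return drained;
    }

    // Post-fix behaviour: a full batch means the backlog is building up,
    // so skip the coalescing sleep entirely.
    static boolean maybeSleep(int drained, int maxItems) {
        return drained < maxItems;
    }

    public static void main(String[] args) {
        Queue<Integer> backlog = new ArrayDeque<>();
        for (int i = 0; i < 10; i++)
            backlog.add(i);
        System.out.println(maybeSleep(drain(backlog, 10), 10)); // false: full batch, skip the sleep
    }
}
```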
[jira] [Updated] (CASSANDRA-13119) dtest failure upgrade_tests.upgrade_supercolumns_test.TestSCUpgrade.upgrade_super_columns_through_all_versions_test
[ https://issues.apache.org/jira/browse/CASSANDRA-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-13119: --- Fix Version/s: 2.2.x > dtest failure > upgrade_tests.upgrade_supercolumns_test.TestSCUpgrade.upgrade_super_columns_through_all_versions_test > --- > > Key: CASSANDRA-13119 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13119 > Project: Cassandra > Issue Type: Bug > Components: Core, Testing >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg >Priority: Critical > Fix For: 2.2.x, 3.0.x, 3.x, 4.x > > > The test complains about unreadable sstables version ka and lb during upgrade > which is 2.1 and 2.2. These tables look like system tables not user tables. > I looked and I can't find any place where system tables are upgraded on > upgrade. You can specify them explicitly by name with nodetool, but nodetool > defaults to only upgrading user tables and doesn't have a flag to upgrade all > tables. > These tables probably need to be removed if unused or upgraded if in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9425) Make node-local schema fully immutable
[ https://issues.apache.org/jira/browse/CASSANDRA-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-9425: - Resolution: Fixed Status: Resolved (was: Patch Available) > Make node-local schema fully immutable > -- > > Key: CASSANDRA-9425 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9425 > Project: Cassandra > Issue Type: Sub-task > Reporter: Aleksey Yeschenko > Assignee: Aleksey Yeschenko > Fix For: 4.0 > > > The way we handle schema changes currently is inherently racy. > All of our {{SchemaAlteringStatement}}s perform validation on a schema state > that won't necessarily be there when the statement gets executed and > mutates schema. > We should make all the *Metadata classes ({{KeyspaceMetadata}}, > {{TableMetadata}}, {{ColumnMetadata}}) immutable, and local schema persistently > snapshottable, with a single top-level {{AtomicReference}} to the current > snapshot. Have DDL statements perform validation and transformation on the > same state. > In pseudo-code, think > {code} > public interface DDLStatement > { > /** > * Validates that the DDL statement can be applied to the provided schema > snapshot. > * > * @param schema snapshot of schema before executing CREATE KEYSPACE > */ > void validate(SchemaSnapshot schema); > > /** > * Applies the DDL statement to the provided schema snapshot. > * Implies that validate() has already been called on the provided > snapshot. > * > * @param schema snapshot of schema before executing the statement > * @return snapshot of schema as it would be after executing the statement > */ > SchemaSnapshot transform(SchemaSnapshot schema); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-9425) Make node-local schema fully immutable
[ https://issues.apache.org/jira/browse/CASSANDRA-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843582#comment-15843582 ] Aleksey Yeschenko edited comment on CASSANDRA-9425 at 1/27/17 10:22 PM: Committed as [af3fe39dcabd9ef77a00309ce6741268423206df|https://github.com/apache/cassandra/commit/af3fe39dcabd9ef77a00309ce6741268423206df] to trunk. Will open up a separate follow up ticket for DDL statement rework, as the comment section for this JIRA has grown quite a bit. Thanks again for the review. was (Author: iamaleksey): Committed as [af3fe39dcabd9ef77a00309ce6741268423206df|] to trunk. Will open up a separate follow up ticket for DDL statement rework, as the comment section for this JIRA has grown quite a bit. Thanks again for the review. > Make node-local schema fully immutable > -- > > Key: CASSANDRA-9425 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9425 > Project: Cassandra > Issue Type: Sub-task > Reporter: Aleksey Yeschenko > Assignee: Aleksey Yeschenko > Fix For: 4.0 > > > The way we handle schema changes currently is inherently racy. > All of our {{SchemaAlteringStatement}}s perform validation on a schema state > that won't necessarily be there when the statement gets executed and > mutates schema. > We should make all the *Metadata classes ({{KeyspaceMetadata}}, > {{TableMetadata}}, {{ColumnMetadata}}) immutable, and local schema persistently > snapshottable, with a single top-level {{AtomicReference}} to the current > snapshot. Have DDL statements perform validation and transformation on the > same state. > In pseudo-code, think > {code} > public interface DDLStatement > { > /** > * Validates that the DDL statement can be applied to the provided schema > snapshot. > * > * @param schema snapshot of schema before executing CREATE KEYSPACE > */ > void validate(SchemaSnapshot schema); > > /** > * Applies the DDL statement to the provided schema snapshot. 
> * Implies that validate() has already been called on the provided > snapshot. > * > * @param schema snapshot of schema before executing the statement > * @return snapshot of schema as it would be after executing the statement > */ > SchemaSnapshot transform(SchemaSnapshot schema); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9425) Make node-local schema fully immutable
[ https://issues.apache.org/jira/browse/CASSANDRA-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843582#comment-15843582 ] Aleksey Yeschenko commented on CASSANDRA-9425: -- Committed as [af3fe39dcabd9ef77a00309ce6741268423206df|] to trunk. Will open up a separate follow up ticket for DDL statement rework, as the comment section for this JIRA has grown quite a bit. Thanks again for the review. > Make node-local schema fully immutable > -- > > Key: CASSANDRA-9425 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9425 > Project: Cassandra > Issue Type: Sub-task > Reporter: Aleksey Yeschenko > Assignee: Aleksey Yeschenko > Fix For: 4.0 > > > The way we handle schema changes currently is inherently racy. > All of our {{SchemaAlteringStatement}}s perform validation on a schema state > that won't necessarily be there when the statement gets executed and > mutates schema. > We should make all the *Metadata classes ({{KeyspaceMetadata}}, > {{TableMetadata}}, {{ColumnMetadata}}) immutable, and local schema persistently > snapshottable, with a single top-level {{AtomicReference}} to the current > snapshot. Have DDL statements perform validation and transformation on the > same state. > In pseudo-code, think > {code} > public interface DDLStatement > { > /** > * Validates that the DDL statement can be applied to the provided schema > snapshot. > * > * @param schema snapshot of schema before executing CREATE KEYSPACE > */ > void validate(SchemaSnapshot schema); > > /** > * Applies the DDL statement to the provided schema snapshot. > * Implies that validate() has already been called on the provided > snapshot. > * > * @param schema snapshot of schema before executing the statement > * @return snapshot of schema as it would be after executing the statement > */ > SchemaSnapshot transform(SchemaSnapshot schema); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
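The ticket's validate/transform pseudocode can be made concrete with a single top-level AtomicReference, as the description suggests. The sketch below reduces the schema to a set of keyspace names for illustration; none of these class names are from the actual patch.

```java
import java.util.Set;
import java.util.TreeSet;
import java.util.concurrent.atomic.AtomicReference;

// Runnable sketch of the validate/transform pattern: both steps see the same
// immutable snapshot, closing the race between validation and execution.
public class SchemaSnapshotDemo {
    static final class SchemaSnapshot {
        final Set<String> keyspaces;
        SchemaSnapshot(Set<String> keyspaces) { this.keyspaces = Set.copyOf(keyspaces); }
    }

    interface DDLStatement {
        void validate(SchemaSnapshot schema);
        SchemaSnapshot transform(SchemaSnapshot schema);
    }

    static DDLStatement createKeyspace(String name) {
        return new DDLStatement() {
            public void validate(SchemaSnapshot s) {
                if (s.keyspaces.contains(name))
                    throw new IllegalStateException("keyspace already exists: " + name);
            }
            public SchemaSnapshot transform(SchemaSnapshot s) {
                Set<String> next = new TreeSet<>(s.keyspaces);
                next.add(name);
                return new SchemaSnapshot(next); // a new snapshot, old one untouched
            }
        };
    }

    static final AtomicReference<SchemaSnapshot> current =
            new AtomicReference<>(new SchemaSnapshot(Set.of()));

    // validate() and transform() run against the identical snapshot instance.
    static void execute(DDLStatement stmt) {
        current.updateAndGet(snapshot -> {
            stmt.validate(snapshot);
            return stmt.transform(snapshot);
        });
    }

    public static void main(String[] args) {
        execute(createKeyspace("ks1"));
        System.out.println(current.get().keyspaces); // [ks1]
    }
}
```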
[34/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/cql3/Relation.java -- diff --git a/src/java/org/apache/cassandra/cql3/Relation.java b/src/java/org/apache/cassandra/cql3/Relation.java index 097b88e..1d27874 100644 --- a/src/java/org/apache/cassandra/cql3/Relation.java +++ b/src/java/org/apache/cassandra/cql3/Relation.java @@ -20,8 +20,8 @@ package org.apache.cassandra.cql3; import java.util.ArrayList; import java.util.List; -import org.apache.cassandra.config.CFMetaData; -import org.apache.cassandra.config.ColumnDefinition; +import org.apache.cassandra.schema.TableMetadata; +import org.apache.cassandra.schema.ColumnMetadata; import org.apache.cassandra.cql3.restrictions.Restriction; import org.apache.cassandra.cql3.statements.Bound; import org.apache.cassandra.exceptions.InvalidRequestException; @@ -132,31 +132,30 @@ public abstract class Relation /** * Converts this Relation into a Restriction. * - * @param cfm the Column Family meta data + * @param table the Column Family meta data * @param boundNames the variables specification where to collect the bind variables * @return the Restriction corresponding to this Relation * @throws InvalidRequestException if this Relation is not valid */ -public final Restriction toRestriction(CFMetaData cfm, - VariableSpecifications boundNames) throws InvalidRequestException +public final Restriction toRestriction(TableMetadata table, VariableSpecifications boundNames) { switch (relationType) { -case EQ: return newEQRestriction(cfm, boundNames); -case LT: return newSliceRestriction(cfm, boundNames, Bound.END, false); -case LTE: return newSliceRestriction(cfm, boundNames, Bound.END, true); -case GTE: return newSliceRestriction(cfm, boundNames, Bound.START, true); -case GT: return newSliceRestriction(cfm, boundNames, Bound.START, false); -case IN: return newINRestriction(cfm, boundNames); -case CONTAINS: return newContainsRestriction(cfm, boundNames, false); -case CONTAINS_KEY: return 
newContainsRestriction(cfm, boundNames, true); -case IS_NOT: return newIsNotRestriction(cfm, boundNames); +case EQ: return newEQRestriction(table, boundNames); +case LT: return newSliceRestriction(table, boundNames, Bound.END, false); +case LTE: return newSliceRestriction(table, boundNames, Bound.END, true); +case GTE: return newSliceRestriction(table, boundNames, Bound.START, true); +case GT: return newSliceRestriction(table, boundNames, Bound.START, false); +case IN: return newINRestriction(table, boundNames); +case CONTAINS: return newContainsRestriction(table, boundNames, false); +case CONTAINS_KEY: return newContainsRestriction(table, boundNames, true); +case IS_NOT: return newIsNotRestriction(table, boundNames); case LIKE_PREFIX: case LIKE_SUFFIX: case LIKE_CONTAINS: case LIKE_MATCHES: case LIKE: -return newLikeRestriction(cfm, boundNames, relationType); +return newLikeRestriction(table, boundNames, relationType); default: throw invalidRequest("Unsupported \"!=\" relation: %s", this); } } @@ -164,59 +163,52 @@ public abstract class Relation /** * Creates a new EQ restriction instance. * - * @param cfm the Column Family meta data + * @param table the table meta data * @param boundNames the variables specification where to collect the bind variables * @return a new EQ restriction instance. * @throws InvalidRequestException if the relation cannot be converted into an EQ restriction. */ -protected abstract Restriction newEQRestriction(CFMetaData cfm, -VariableSpecifications boundNames) throws InvalidRequestException; +protected abstract Restriction newEQRestriction(TableMetadata table, VariableSpecifications boundNames); /** * Creates a new IN restriction instance. * - * @param cfm the Column Family meta data + * @param table the table meta data * @param boundNames the variables specification where to collect the bind variables * @return a new IN restriction instance * @throws InvalidRequestException if the relation cannot be converted into an IN restriction. 
*/ -protected abstract Restriction newINRestriction(CFMetaData cfm, -VariableSpecifications boundNames) throws InvalidRequestException; +protected abstract Restriction newINRestriction(TableMetadata table,
[29/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/db/ClusteringPrefix.java -- diff --git a/src/java/org/apache/cassandra/db/ClusteringPrefix.java b/src/java/org/apache/cassandra/db/ClusteringPrefix.java index 340e237..0c67b82 100644 --- a/src/java/org/apache/cassandra/db/ClusteringPrefix.java +++ b/src/java/org/apache/cassandra/db/ClusteringPrefix.java @@ -24,10 +24,12 @@ import java.util.*; import org.apache.cassandra.cache.IMeasurableMemory; import org.apache.cassandra.config.*; +import org.apache.cassandra.db.marshal.CompositeType; import org.apache.cassandra.db.rows.*; import org.apache.cassandra.db.marshal.AbstractType; import org.apache.cassandra.io.util.DataInputPlus; import org.apache.cassandra.io.util.DataOutputPlus; +import org.apache.cassandra.schema.TableMetadata; import org.apache.cassandra.utils.ByteBufferUtil; /** @@ -235,8 +237,24 @@ public interface ClusteringPrefix extends IMeasurableMemory, Clusterable * @param metadata the metadata for the table the clustering prefix is of. * @return a human-readable string representation fo this prefix. */ -public String toString(CFMetaData metadata); +public String toString(TableMetadata metadata); +/* + * TODO: we should stop using Clustering for partition keys. Maybe we can add + * a few methods to DecoratedKey so we don't have to (note that while using a Clustering + * allows to use buildBound(), it's actually used for partition keys only when every restriction + * is an equal, so we could easily create a specific method for keys for that. + */ +default ByteBuffer serializeAsPartitionKey() +{ +if (size() == 1) +return get(0); + +ByteBuffer[] values = new ByteBuffer[size()]; +for (int i = 0; i < size(); i++) +values[i] = get(i); +return CompositeType.build(values); +} /** * The values of this prefix as an array. 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index f712935..4aa2d3e 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -21,6 +21,8 @@ import java.io.File;
 import java.io.IOException;
 import java.io.PrintStream;
 import java.lang.management.ManagementFactory;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
 import java.nio.ByteBuffer;
 import java.nio.file.Files;
 import java.util.*;
@@ -65,7 +67,6 @@ import org.apache.cassandra.io.sstable.Component;
 import org.apache.cassandra.io.sstable.Descriptor;
 import org.apache.cassandra.io.sstable.SSTableMultiWriter;
 import org.apache.cassandra.io.sstable.format.*;
-import org.apache.cassandra.io.sstable.format.big.BigFormat;
 import org.apache.cassandra.io.sstable.metadata.MetadataCollector;
 import org.apache.cassandra.io.util.FileUtils;
 import org.apache.cassandra.metrics.TableMetrics;
@@ -142,6 +143,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
                                                                              "internal");
 
     private static final ExecutorService[] perDiskflushExecutors = new ExecutorService[DatabaseDescriptor.getAllDataFileLocations().length];
+
     static
     {
         for (int i = 0; i < DatabaseDescriptor.getAllDataFileLocations().length; i++)
@@ -208,7 +210,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
     public final Keyspace keyspace;
     public final String name;
-    public final CFMetaData metadata;
+    public final TableMetadataRef metadata;
 
     private final String mbeanName;
     @Deprecated
     private final String oldMBeanName;
@@ -261,15 +263,15 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
         // only update these runtime-modifiable settings if they have not been modified.
         if (!minCompactionThreshold.isModified())
             for (ColumnFamilyStore cfs : concatWithIndexes())
-                cfs.minCompactionThreshold = new DefaultValue(metadata.params.compaction.minCompactionThreshold());
+                cfs.minCompactionThreshold = new DefaultValue(metadata().params.compaction.minCompactionThreshold());
         if (!maxCompactionThreshold.isModified())
             for (ColumnFamilyStore cfs : concatWithIndexes())
-                cfs.maxCompactionThreshold = new DefaultValue(metadata.params.compaction.maxCompactionThreshold());
+                cfs.maxCompactionThreshold = new DefaultValue(metadata().params.compaction.maxCompactionThreshold());
         if
[35/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/config/Schema.java
--
diff --git a/src/java/org/apache/cassandra/config/Schema.java b/src/java/org/apache/cassandra/config/Schema.java
deleted file mode 100644
index c6fc2a8..0000000
--- a/src/java/org/apache/cassandra/config/Schema.java
+++ /dev/null
@@ -1,776 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.config;
-
-import java.util.*;
-import java.util.stream.Collectors;
-
-import com.google.common.collect.ImmutableList;
-import com.google.common.collect.Sets;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import org.apache.cassandra.cql3.functions.*;
-import org.apache.cassandra.db.ColumnFamilyStore;
-import org.apache.cassandra.db.Keyspace;
-import org.apache.cassandra.db.SystemKeyspace;
-import org.apache.cassandra.db.commitlog.CommitLog;
-import org.apache.cassandra.db.compaction.CompactionManager;
-import org.apache.cassandra.db.marshal.AbstractType;
-import org.apache.cassandra.db.marshal.UserType;
-import org.apache.cassandra.index.Index;
-import org.apache.cassandra.io.sstable.Descriptor;
-import org.apache.cassandra.locator.LocalStrategy;
-import org.apache.cassandra.schema.*;
-import org.apache.cassandra.service.MigrationManager;
-import org.apache.cassandra.utils.ConcurrentBiMap;
-import org.apache.cassandra.utils.Pair;
-import org.cliffc.high_scale_lib.NonBlockingHashMap;
-
-public class Schema
-{
-    private static final Logger logger = LoggerFactory.getLogger(Schema.class);
-
-    public static final Schema instance = new Schema();
-
-    /* metadata map for faster keyspace lookup */
-    private final Map<String, KeyspaceMetadata> keyspaces = new NonBlockingHashMap<>();
-
-    /* Keyspace objects, one per keyspace. Only one instance should ever exist for any given keyspace. */
-    private final Map<String, Keyspace> keyspaceInstances = new NonBlockingHashMap<>();
-
-    /* metadata map for faster ColumnFamily lookup */
-    private final ConcurrentBiMap<Pair<String, String>, UUID> cfIdMap = new ConcurrentBiMap<>();
-
-    private volatile UUID version;
-
-    /**
-     * Initialize empty schema object and load the hardcoded system tables
-     */
-    public Schema()
-    {
-        if (DatabaseDescriptor.isDaemonInitialized() || DatabaseDescriptor.isToolInitialized())
-        {
-            load(SchemaKeyspace.metadata());
-            load(SystemKeyspace.metadata());
-        }
-    }
-
-    /**
-     * load keyspace (keyspace) definitions, but do not initialize the keyspace instances.
-     * Schema version may be updated as the result.
-     */
-    public Schema loadFromDisk()
-    {
-        return loadFromDisk(true);
-    }
-
-    /**
-     * Load schema definitions from disk.
-     *
-     * @param updateVersion true if schema version needs to be updated
-     */
-    public Schema loadFromDisk(boolean updateVersion)
-    {
-        load(SchemaKeyspace.fetchNonSystemKeyspaces());
-        if (updateVersion)
-            updateVersion();
-        return this;
-    }
-
-    /**
-     * Load up non-system keyspaces
-     *
-     * @param keyspaceDefs The non-system keyspace definitions
-     *
-     * @return self to support chaining calls
-     */
-    public Schema load(Iterable<KeyspaceMetadata> keyspaceDefs)
-    {
-        keyspaceDefs.forEach(this::load);
-        return this;
-    }
-
-    /**
-     * Load specific keyspace into Schema
-     *
-     * @param keyspaceDef The keyspace to load up
-     *
-     * @return self to support chaining calls
-     */
-    public Schema load(KeyspaceMetadata keyspaceDef)
-    {
-        keyspaceDef.tables.forEach(this::load);
-        keyspaceDef.views.forEach(this::load);
-        setKeyspaceMetadata(keyspaceDef);
-        return this;
-    }
-
-    /**
-     * Get keyspace instance by name
-     *
-     * @param keyspaceName The name of the keyspace
-     *
-     * @return Keyspace object or null if keyspace was not found
-     */
-    public Keyspace getKeyspaceInstance(String keyspaceName)
-    {
-        return keyspaceInstances.get(keyspaceName);
-    }
-
-    /**
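The deleted `Schema` class above keeps its metadata in lock-free concurrent maps (`NonBlockingHashMap`) so that hot-path lookups never synchronize. Below is a minimal, self-contained sketch of that pattern using the JDK's `ConcurrentHashMap`; `KeyspaceMetadata` and `Keyspace` here are simplified stand-ins, not the real Cassandra types, and the lazy open in `getKeyspaceInstance` is a simplification of the real class's behavior.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SchemaSketch
{
    // Stand-ins for the real schema types.
    record KeyspaceMetadata(String name) {}
    record Keyspace(String name) {}

    // Metadata map for fast keyspace lookup (no locking on reads).
    private final Map<String, KeyspaceMetadata> keyspaces = new ConcurrentHashMap<>();
    // One Keyspace instance per keyspace, ever.
    private final Map<String, Keyspace> keyspaceInstances = new ConcurrentHashMap<>();

    // Returns this to support chaining, as Schema.load() does.
    public SchemaSketch load(KeyspaceMetadata ksm)
    {
        keyspaces.put(ksm.name(), ksm);
        return this;
    }

    // Returns the (lazily opened) instance, or null if the keyspace is unknown.
    public Keyspace getKeyspaceInstance(String name)
    {
        return keyspaces.containsKey(name)
             ? keyspaceInstances.computeIfAbsent(name, Keyspace::new)
             : null;
    }
}
```

The `computeIfAbsent` call guarantees the "only one instance should ever exist for any given keyspace" invariant without explicit locking.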
[23/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/db/rows/Cell.java
--
diff --git a/src/java/org/apache/cassandra/db/rows/Cell.java b/src/java/org/apache/cassandra/db/rows/Cell.java
index 19d1f30..4636022 100644
--- a/src/java/org/apache/cassandra/db/rows/Cell.java
+++ b/src/java/org/apache/cassandra/db/rows/Cell.java
@@ -25,6 +25,7 @@ import org.apache.cassandra.config.*;
 import org.apache.cassandra.db.*;
 import org.apache.cassandra.io.util.DataOutputPlus;
 import org.apache.cassandra.io.util.DataInputPlus;
+import org.apache.cassandra.schema.ColumnMetadata;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.memory.AbstractAllocator;
@@ -54,7 +55,7 @@ public abstract class Cell extends ColumnData
 
     public static final Serializer serializer = new BufferCell.Serializer();
 
-    protected Cell(ColumnDefinition column)
+    protected Cell(ColumnMetadata column)
     {
         super(column);
     }
@@ -130,7 +131,7 @@ public abstract class Cell extends ColumnData
      */
     public abstract CellPath path();
 
-    public abstract Cell withUpdatedColumn(ColumnDefinition newColumn);
+    public abstract Cell withUpdatedColumn(ColumnMetadata newColumn);
 
     public abstract Cell withUpdatedValue(ByteBuffer newValue);
@@ -171,7 +172,7 @@ public abstract class Cell extends ColumnData
     private final static int USE_ROW_TIMESTAMP_MASK = 0x08; // Wether the cell has the same timestamp than the row this is a cell of.
     private final static int USE_ROW_TTL_MASK       = 0x10; // Wether the cell has the same ttl than the row this is a cell of.
 
-    public void serialize(Cell cell, ColumnDefinition column, DataOutputPlus out, LivenessInfo rowLiveness, SerializationHeader header) throws IOException
+    public void serialize(Cell cell, ColumnMetadata column, DataOutputPlus out, LivenessInfo rowLiveness, SerializationHeader header) throws IOException
     {
         assert cell != null;
         boolean hasValue = cell.value().hasRemaining();
@@ -210,7 +211,7 @@ public abstract class Cell extends ColumnData
             header.getType(column).writeValue(cell.value(), out);
     }
 
-    public Cell deserialize(DataInputPlus in, LivenessInfo rowLiveness, ColumnDefinition column, SerializationHeader header, SerializationHelper helper) throws IOException
+    public Cell deserialize(DataInputPlus in, LivenessInfo rowLiveness, ColumnMetadata column, SerializationHeader header, SerializationHelper helper) throws IOException
     {
         int flags = in.readUnsignedByte();
         boolean hasValue = (flags & HAS_EMPTY_VALUE_MASK) == 0;
@@ -251,7 +252,7 @@ public abstract class Cell extends ColumnData
         return new BufferCell(column, timestamp, ttl, localDeletionTime, value, path);
     }
 
-    public long serializedSize(Cell cell, ColumnDefinition column, LivenessInfo rowLiveness, SerializationHeader header)
+    public long serializedSize(Cell cell, ColumnMetadata column, LivenessInfo rowLiveness, SerializationHeader header)
     {
         long size = 1; // flags
         boolean hasValue = cell.value().hasRemaining();
@@ -278,7 +279,7 @@ public abstract class Cell extends ColumnData
     }
 
     // Returns if the skipped cell was an actual cell (i.e. it had its presence flag).
-    public boolean skip(DataInputPlus in, ColumnDefinition column, SerializationHeader header) throws IOException
+    public boolean skip(DataInputPlus in, ColumnMetadata column, SerializationHeader header) throws IOException
     {
         int flags = in.readUnsignedByte();
         boolean hasValue = (flags & HAS_EMPTY_VALUE_MASK) == 0;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/db/rows/Cells.java
--
diff --git a/src/java/org/apache/cassandra/db/rows/Cells.java b/src/java/org/apache/cassandra/db/rows/Cells.java
index 38bde16..7f2772c 100644
--- a/src/java/org/apache/cassandra/db/rows/Cells.java
+++ b/src/java/org/apache/cassandra/db/rows/Cells.java
@@ -21,7 +21,7 @@ import java.nio.ByteBuffer;
 import java.util.Comparator;
 import java.util.Iterator;
 
-import org.apache.cassandra.config.ColumnDefinition;
+import org.apache.cassandra.schema.ColumnMetadata;
 import org.apache.cassandra.db.Conflicts;
 import org.apache.cassandra.db.DeletionTime;
 import org.apache.cassandra.db.partitions.PartitionStatisticsCollector;
@@ -197,7 +197,7 @@ public abstract class Cells
      * of cells from {@code existing} and {@code update} having the same cell path is empty, this
      * returns {@code Long.MAX_VALUE}.
      */
-    public static long reconcileComplex(ColumnDefinition column,
+    public static long
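The Cell serializer above packs several per-cell properties into a single flags byte so `deserialize()` and `skip()` can branch after reading just one byte. The sketch below uses the two mask values visible in the diff (`USE_ROW_TIMESTAMP_MASK = 0x08`, `USE_ROW_TTL_MASK = 0x10`); the other masks (e.g. `HAS_EMPTY_VALUE_MASK`) are omitted rather than guessed, and the encode/decode helpers are illustrative, not Cassandra's API.

```java
public class CellFlagsSketch
{
    // Values as shown in the Cell.Serializer diff.
    static final int USE_ROW_TIMESTAMP_MASK = 0x08; // cell shares the row's timestamp
    static final int USE_ROW_TTL_MASK       = 0x10; // cell shares the row's ttl

    // Pack the two properties into one flags byte.
    static int encode(boolean useRowTimestamp, boolean useRowTtl)
    {
        int flags = 0;
        if (useRowTimestamp) flags |= USE_ROW_TIMESTAMP_MASK;
        if (useRowTtl)       flags |= USE_ROW_TTL_MASK;
        return flags;
    }

    // Test a single bit, mirroring the (flags & MASK) != 0 checks in deserialize().
    static boolean useRowTimestamp(int flags) { return (flags & USE_ROW_TIMESTAMP_MASK) != 0; }
    static boolean useRowTtl(int flags)       { return (flags & USE_ROW_TTL_MASK) != 0; }
}
```

Because the flags byte is written first, a reader that only wants to skip the cell (as `skip()` does) can decide how many further fields to consume without materializing the cell.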
[18/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
--
diff --git a/src/java/org/apache/cassandra/schema/SchemaKeyspace.java b/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
index ee0974f..6716652 100644
--- a/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
+++ b/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
@@ -23,7 +23,6 @@ import java.security.MessageDigest;
 import java.security.NoSuchAlgorithmException;
 import java.util.*;
 import java.util.concurrent.TimeUnit;
-import java.util.stream.Collectors;
 
 import com.google.common.collect.ImmutableList;
 import com.google.common.collect.MapDifference;
@@ -32,7 +31,8 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.config.*;
-import org.apache.cassandra.config.ColumnDefinition.ClusteringOrder;
+import org.apache.cassandra.cql3.statements.CreateTableStatement;
+import org.apache.cassandra.schema.ColumnMetadata.ClusteringOrder;
 import org.apache.cassandra.cql3.*;
 import org.apache.cassandra.cql3.functions.*;
 import org.apache.cassandra.cql3.statements.SelectStatement;
@@ -42,19 +42,17 @@ import org.apache.cassandra.db.partitions.*;
 import org.apache.cassandra.db.rows.*;
 import org.apache.cassandra.db.filter.ColumnFilter;
 import org.apache.cassandra.db.view.View;
-import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.exceptions.InvalidRequestException;
 import org.apache.cassandra.transport.ProtocolVersion;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.FBUtilities;
-import org.apache.cassandra.utils.Pair;
 
 import static java.lang.String.format;
 import static java.util.stream.Collectors.toList;
+import static java.util.stream.Collectors.toSet;
 import static org.apache.cassandra.cql3.QueryProcessor.executeInternal;
 import static org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal;
-import static org.apache.cassandra.schema.CQLTypeParser.parse;
 
 /**
  * system_schema.* tables and methods for manipulating them.
@@ -83,163 +81,167 @@ public final class SchemaKeyspace
 
     public static final List<String> ALL = ImmutableList.of(KEYSPACES, TABLES, COLUMNS, DROPPED_COLUMNS, TRIGGERS, VIEWS, TYPES, FUNCTIONS, AGGREGATES, INDEXES);
 
-    private static final CFMetaData Keyspaces =
-        compile(KEYSPACES,
-                "keyspace definitions",
-                "CREATE TABLE %s ("
-                + "keyspace_name text,"
-                + "durable_writes boolean,"
-                + "replication frozen
[31/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java b/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
index ecabd2f..7560e2f 100644
--- a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java
@@ -27,21 +27,20 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.auth.Permission;
-import org.apache.cassandra.config.CFMetaData;
-import org.apache.cassandra.config.ColumnDefinition;
-import org.apache.cassandra.config.Schema;
 import org.apache.cassandra.cql3.CFName;
 import org.apache.cassandra.cql3.ColumnIdentifier;
 import org.apache.cassandra.cql3.IndexName;
-import org.apache.cassandra.cql3.Validation;
 import org.apache.cassandra.db.marshal.MapType;
 import org.apache.cassandra.exceptions.InvalidRequestException;
 import org.apache.cassandra.exceptions.RequestValidationException;
 import org.apache.cassandra.exceptions.UnauthorizedException;
+import org.apache.cassandra.schema.ColumnMetadata;
 import org.apache.cassandra.schema.IndexMetadata;
 import org.apache.cassandra.schema.Indexes;
+import org.apache.cassandra.schema.MigrationManager;
+import org.apache.cassandra.schema.Schema;
+import org.apache.cassandra.schema.TableMetadata;
 import org.apache.cassandra.service.ClientState;
-import org.apache.cassandra.service.MigrationManager;
 import org.apache.cassandra.service.QueryState;
 import org.apache.cassandra.transport.Event;
@@ -75,20 +74,20 @@ public class CreateIndexStatement extends SchemaAlteringStatement
 
     public void validate(ClientState state) throws RequestValidationException
     {
-        CFMetaData cfm = Validation.validateColumnFamily(keyspace(), columnFamily());
+        TableMetadata table = Schema.instance.validateTable(keyspace(), columnFamily());
 
-        if (cfm.isCounter())
+        if (table.isCounter())
             throw new InvalidRequestException("Secondary indexes are not supported on counter tables");
 
-        if (cfm.isView())
+        if (table.isView())
             throw new InvalidRequestException("Secondary indexes are not supported on materialized views");
 
-        if (cfm.isCompactTable() && !cfm.isStaticCompactTable())
+        if (table.isCompactTable() && !table.isStaticCompactTable())
             throw new InvalidRequestException("Secondary indexes are not supported on COMPACT STORAGE tables that have clustering columns");
 
         List targets = new ArrayList<>(rawTargets.size());
         for (IndexTarget.Raw rawTarget : rawTargets)
-            targets.add(rawTarget.prepare(cfm));
+            targets.add(rawTarget.prepare(table));
 
         if (targets.isEmpty() && !properties.isCustom)
             throw new InvalidRequestException("Only CUSTOM indexes can be created without specifying a target column");
@@ -98,16 +97,16 @@ public class CreateIndexStatement extends SchemaAlteringStatement
 
         for (IndexTarget target : targets)
         {
-            ColumnDefinition cd = cfm.getColumnDefinition(target.column);
+            ColumnMetadata cd = table.getColumn(target.column);
 
             if (cd == null)
                 throw new InvalidRequestException("No column definition found for column " + target.column);
 
             // TODO: we could lift that limitation
-            if (cfm.isCompactTable() && cd.isPrimaryKeyColumn())
+            if (table.isCompactTable() && cd.isPrimaryKeyColumn())
                 throw new InvalidRequestException("Secondary indexes are not supported on PRIMARY KEY columns in COMPACT STORAGE tables");
 
-            if (cd.kind == ColumnDefinition.Kind.PARTITION_KEY && cfm.getKeyValidatorAsClusteringComparator().size() == 1)
+            if (cd.kind == ColumnMetadata.Kind.PARTITION_KEY && table.partitionKeyColumns().size() == 1)
                 throw new InvalidRequestException(String.format("Cannot create secondary index on partition key column %s", target.column));
 
             boolean isMap = cd.type instanceof MapType;
@@ -126,7 +125,7 @@ public class CreateIndexStatement extends SchemaAlteringStatement
 
         if (!Strings.isNullOrEmpty(indexName))
         {
-            if (Schema.instance.getKSMetaData(keyspace()).existingIndexNames(null).contains(indexName))
+            if (Schema.instance.getKeyspaceMetadata(keyspace()).existingIndexNames(null).contains(indexName))
             {
                 if (ifNotExists)
                     return;
@@ -152,7 +151,7 @@ public class CreateIndexStatement extends SchemaAlteringStatement
             throw new InvalidRequestException("full() indexes can only be created on frozen collections");
     }
 
-    private void
[37/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
Make TableMetadata immutable, optimize Schema

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for CASSANDRA-9425

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/af3fe39d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/af3fe39d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/af3fe39d

Branch: refs/heads/trunk
Commit: af3fe39dcabd9ef77a00309ce6741268423206df
Parents: 3580f6c
Author: Aleksey Yeschenko
Authored: Thu Nov 10 01:16:59 2016 +0000
Committer: Aleksey Yeschenko
Committed: Fri Jan 27 22:17:46 2017 +0000
--
 CHANGES.txt                                     |    2 +
 .../apache/cassandra/triggers/AuditTrigger.java |   12 +-
 src/antlr/Cql.g                                 |    8 +-
 src/antlr/Parser.g                              |   60 +-
 .../org/apache/cassandra/auth/AuthKeyspace.java |   87 +-
 .../cassandra/auth/AuthMigrationListener.java   |   53 -
 .../auth/AuthSchemaChangeListener.java          |   53 +
 .../cassandra/auth/CassandraAuthorizer.java     |   12 +-
 .../cassandra/auth/CassandraRoleManager.java    |   12 +-
 .../org/apache/cassandra/auth/DataResource.java |    4 +-
 .../apache/cassandra/auth/FunctionResource.java |    2 +-
 .../cassandra/auth/PasswordAuthenticator.java   |    8 +-
 .../cassandra/batchlog/BatchlogManager.java     |    9 +-
 .../apache/cassandra/cache/AutoSavingCache.java |   29 +-
 .../org/apache/cassandra/cache/CacheKey.java    |   24 +-
 .../apache/cassandra/cache/CounterCacheKey.java |   50 +-
 .../org/apache/cassandra/cache/KeyCacheKey.java |   23 +-
 .../org/apache/cassandra/cache/OHCProvider.java |   27 +-
 .../org/apache/cassandra/cache/RowCacheKey.java |   35 +-
 .../org/apache/cassandra/config/CFMetaData.java | 1362 --
 .../cassandra/config/ColumnDefinition.java      |  623
 .../cassandra/config/ReadRepairDecision.java    |   23 -
 .../org/apache/cassandra/config/Schema.java     |  776 --
 .../cassandra/config/SchemaConstants.java       |   74 -
 .../apache/cassandra/config/ViewDefinition.java |  166 ---
 .../org/apache/cassandra/cql3/CQL3Type.java     |    4 +-
 .../apache/cassandra/cql3/ColumnIdentifier.java |    1 -
 .../org/apache/cassandra/cql3/Constants.java    |   10 +-
 src/java/org/apache/cassandra/cql3/Json.java    |   34 +-
 src/java/org/apache/cassandra/cql3/Lists.java   |   18 +-
 src/java/org/apache/cassandra/cql3/Maps.java    |   12 +-
 .../cassandra/cql3/MultiColumnRelation.java     |   62 +-
 .../org/apache/cassandra/cql3/Operation.java    |   77 +-
 .../org/apache/cassandra/cql3/QueryOptions.java |    4 +-
 .../apache/cassandra/cql3/QueryProcessor.java   |   19 +-
 .../org/apache/cassandra/cql3/Relation.java     |   65 +-
 src/java/org/apache/cassandra/cql3/Sets.java    |   12 +-
 .../cassandra/cql3/SingleColumnRelation.java    |   64 +-
 .../apache/cassandra/cql3/TokenRelation.java    |   66 +-
 .../apache/cassandra/cql3/UntypedResultSet.java |   21 +-
 .../apache/cassandra/cql3/UpdateParameters.java |   28 +-
 .../org/apache/cassandra/cql3/UserTypes.java    |    8 +-
 .../org/apache/cassandra/cql3/Validation.java   |   58 +-
 .../cassandra/cql3/VariableSpecifications.java  |   24 +-
 .../cql3/conditions/AbstractConditions.java     |    4 +-
 .../cql3/conditions/ColumnCondition.java        |   48 +-
 .../cql3/conditions/ColumnConditions.java       |    6 +-
 .../cassandra/cql3/conditions/Conditions.java   |    4 +-
 .../cql3/functions/AbstractFunction.java        |   11 +
 .../cassandra/cql3/functions/FunctionName.java  |    2 +-
 .../cql3/functions/FunctionResolver.java        |    4 +-
 .../cassandra/cql3/functions/OperationFcts.java |    2 +-
 .../cassandra/cql3/functions/TokenFct.java      |   22 +-
 .../cassandra/cql3/functions/UDFunction.java    |   38 +-
 .../ClusteringColumnRestrictions.java           |   16 +-
 .../restrictions/CustomIndexExpression.java     |   20 +-
 .../cql3/restrictions/IndexRestrictions.java    |   14 +-
 .../restrictions/MultiColumnRestriction.java    |   50 +-
 .../restrictions/PartitionKeyRestrictions.java  |   10 +-
 .../PartitionKeySingleRestrictionSet.java       |   12 +-
 .../cql3/restrictions/Restriction.java          |    8 +-
 .../cql3/restrictions/RestrictionSet.java       |   42 +-
 .../restrictions/RestrictionSetWrapper.java     |   10 +-
 .../cql3/restrictions/Restrictions.java         |    4 +-
 .../restrictions/SingleColumnRestriction.java   |   34 +-
 .../restrictions/StatementRestrictions.java     |  118 +-
 .../cassandra/cql3/restrictions/TermSlice.java  |    4 +-
 .../cql3/restrictions/TokenFilter.java          |   20 +-
 .../cql3/restrictions/TokenRestriction.java     |   42 +-
 .../selection/AbstractFunctionSelector.java     |    4 +-
 .../cql3/selection/CollectionFactory.java       |    4 +-
[19/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/schema/Indexes.java
--
diff --git a/src/java/org/apache/cassandra/schema/Indexes.java b/src/java/org/apache/cassandra/schema/Indexes.java
index eb49d39..81d400e 100644
--- a/src/java/org/apache/cassandra/schema/Indexes.java
+++ b/src/java/org/apache/cassandra/schema/Indexes.java
@@ -15,14 +15,16 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
 package org.apache.cassandra.schema;
 
 import java.util.*;
+import java.util.stream.Stream;
 
 import com.google.common.collect.ImmutableMap;
 
-import org.apache.cassandra.config.Schema;
+import org.apache.cassandra.exceptions.ConfigurationException;
+
+import static java.lang.String.format;
 
 import static com.google.common.collect.Iterables.filter;
@@ -35,7 +37,7 @@ import static com.google.common.collect.Iterables.filter;
  * support is added for multiple target columns per-index and for indexes with
  * TargetType.ROW
  */
-public class Indexes implements Iterable<IndexMetadata>
+public final class Indexes implements Iterable<IndexMetadata>
 {
     private final ImmutableMap<String, IndexMetadata> indexesByName;
     private final ImmutableMap<UUID, IndexMetadata> indexesById;
@@ -56,11 +58,26 @@ public class Indexes implements Iterable<IndexMetadata>
         return builder().build();
     }
 
+    public static Indexes of(IndexMetadata... indexes)
+    {
+        return builder().add(indexes).build();
+    }
+
+    public static Indexes of(Iterable<IndexMetadata> indexes)
+    {
+        return builder().add(indexes).build();
+    }
+
     public Iterator<IndexMetadata> iterator()
     {
         return indexesByName.values().iterator();
     }
 
+    public Stream<IndexMetadata> stream()
+    {
+        return indexesById.values().stream();
+    }
+
     public int size()
     {
         return indexesByName.size();
@@ -121,7 +138,7 @@ public class Indexes implements Iterable<IndexMetadata>
     public Indexes with(IndexMetadata index)
     {
         if (get(index.name).isPresent())
-            throw new IllegalStateException(String.format("Index %s already exists", index.name));
+            throw new IllegalStateException(format("Index %s already exists", index.name));
 
         return builder().add(this).add(index).build();
     }
@@ -131,7 +148,7 @@ public class Indexes implements Iterable<IndexMetadata>
      */
     public Indexes without(String name)
     {
-        IndexMetadata index = get(name).orElseThrow(() -> new IllegalStateException(String.format("Index %s doesn't exist", name)));
+        IndexMetadata index = get(name).orElseThrow(() -> new IllegalStateException(format("Index %s doesn't exist", name)));
 
         return builder().add(filter(this, v -> v != index)).build();
     }
@@ -149,6 +166,25 @@ public class Indexes implements Iterable<IndexMetadata>
         return this == o || (o instanceof Indexes && indexesByName.equals(((Indexes) o).indexesByName));
     }
 
+    public void validate(TableMetadata table)
+    {
+        /*
+         * Index name check is duplicated in Keyspaces, for the time being.
+         * The reason for this is that schema altering statements are not calling
+         * Keyspaces.validate() as of yet. TODO: remove this once they do (on CASSANDRA-9425 completion)
+         */
+        Set<String> indexNames = new HashSet<>();
+        for (IndexMetadata index : indexesByName.values())
+        {
+            if (indexNames.contains(index.name))
+                throw new ConfigurationException(format("Duplicate index name %s for table %s", index.name, table));
+
+            indexNames.add(index.name);
+        }
+
+        indexesByName.values().forEach(i -> i.validate(table));
+    }
+
     @Override
     public int hashCode()
     {
@@ -164,7 +200,7 @@ public class Indexes implements Iterable<IndexMetadata>
 
     public static String getAvailableIndexName(String ksName, String cfName, String indexNameRoot)
     {
-        KeyspaceMetadata ksm = Schema.instance.getKSMetaData(ksName);
+        KeyspaceMetadata ksm = Schema.instance.getKeyspaceMetadata(ksName);
         Set existingNames = ksm == null ? new HashSet<>() : ksm.existingIndexNames(null);
         String baseName = IndexMetadata.getDefaultIndexName(cfName, indexNameRoot);
         String acceptedName = baseName;
@@ -196,6 +232,13 @@ public class Indexes implements Iterable<IndexMetadata>
         return this;
     }
 
+    public Builder add(IndexMetadata... indexes)
+    {
+        for (IndexMetadata index : indexes)
+            add(index);
+        return this;
+    }
+
     public Builder add(Iterable<IndexMetadata> indexes)
     {
         indexes.forEach(this::add);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/schema/KeyspaceMetadata.java
--
diff --git a/src/java/org/apache/cassandra/schema/KeyspaceMetadata.java
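`Indexes` above is an immutable collection with a builder: `with()` and `without()` never mutate the receiver, they construct a fresh instance, which is what lets `TableMetadata` itself be immutable. The following self-contained sketch shows the same pattern with plain strings standing in for `IndexMetadata`; the class and method names mirror the diff but are illustrative, not Cassandra's API.

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public final class NamedSet implements Iterable<String>
{
    private final Map<String, String> byName;

    private NamedSet(Map<String, String> byName)
    {
        this.byName = Collections.unmodifiableMap(byName);
    }

    public static NamedSet of(String... names)
    {
        Map<String, String> m = new LinkedHashMap<>();
        for (String n : names)
            if (m.putIfAbsent(n, n) != null)
                throw new IllegalStateException(String.format("Index %s already exists", n));
        return new NamedSet(m);
    }

    // Returns a NEW instance containing this set plus the given name.
    public NamedSet with(String name)
    {
        Map<String, String> m = new LinkedHashMap<>(byName);
        if (m.putIfAbsent(name, name) != null)
            throw new IllegalStateException(String.format("Index %s already exists", name));
        return new NamedSet(m);
    }

    // Returns a NEW instance without the given name; the receiver is untouched.
    public NamedSet without(String name)
    {
        if (!byName.containsKey(name))
            throw new IllegalStateException(String.format("Index %s doesn't exist", name));
        Map<String, String> m = new LinkedHashMap<>(byName);
        m.remove(name);
        return new NamedSet(m);
    }

    public int size() { return byName.size(); }
    public Iterator<String> iterator() { return byName.keySet().iterator(); }
}
```

Because every instance is immutable, readers can hold a reference across schema changes without any locking — a stale reader simply keeps seeing the snapshot it started with.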
[28/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/db/DefinitionsUpdateVerbHandler.java
--
diff --git a/src/java/org/apache/cassandra/db/DefinitionsUpdateVerbHandler.java b/src/java/org/apache/cassandra/db/DefinitionsUpdateVerbHandler.java
deleted file mode 100644
index 8b3e121..0000000
--- a/src/java/org/apache/cassandra/db/DefinitionsUpdateVerbHandler.java
+++ /dev/null
@@ -1,55 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.db;
-
-import java.util.Collection;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import org.apache.cassandra.concurrent.Stage;
-import org.apache.cassandra.concurrent.StageManager;
-import org.apache.cassandra.exceptions.ConfigurationException;
-import org.apache.cassandra.net.IVerbHandler;
-import org.apache.cassandra.net.MessageIn;
-import org.apache.cassandra.schema.SchemaKeyspace;
-import org.apache.cassandra.utils.WrappedRunnable;
-
-/**
- * Called when node receives updated schema state from the schema migration coordinator node.
- * Such happens when user makes local schema migration on one of the nodes in the ring
- * (which is going to act as coordinator) and that node sends (pushes) it's updated schema state
- * (in form of mutations) to all the alive nodes in the cluster.
- */
-public class DefinitionsUpdateVerbHandler implements IVerbHandler<Collection<Mutation>>
-{
-    private static final Logger logger = LoggerFactory.getLogger(DefinitionsUpdateVerbHandler.class);
-
-    public void doVerb(final MessageIn<Collection<Mutation>> message, int id)
-    {
-        logger.trace("Received schema mutation push from {}", message.from);
-
-        StageManager.getStage(Stage.MIGRATION).submit(new WrappedRunnable()
-        {
-            public void runMayThrow() throws ConfigurationException
-            {
-                SchemaKeyspace.mergeSchemaAndAnnounceVersion(message.payload);
-            }
-        });
-    }
-}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/db/Directories.java
--
diff --git a/src/java/org/apache/cassandra/db/Directories.java b/src/java/org/apache/cassandra/db/Directories.java
index 2bb4784..a3e80e5 100644
--- a/src/java/org/apache/cassandra/db/Directories.java
+++ b/src/java/org/apache/cassandra/db/Directories.java
@@ -28,7 +28,6 @@ import java.util.concurrent.ThreadLocalRandom;
 import java.util.function.BiFunction;
 
 import com.google.common.annotations.VisibleForTesting;
-import com.google.common.base.Predicate;
 import com.google.common.collect.ImmutableMap;
 import com.google.common.collect.Iterables;
@@ -43,8 +42,8 @@ import org.apache.cassandra.io.FSError;
 import org.apache.cassandra.io.FSWriteError;
 import org.apache.cassandra.io.util.FileUtils;
 import org.apache.cassandra.io.sstable.*;
+import org.apache.cassandra.schema.TableMetadata;
 import org.apache.cassandra.utils.DirectorySizeCalculator;
-import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.Pair;
@@ -60,13 +59,13 @@ import org.apache.cassandra.utils.Pair;
  * }
  *
  * Until v2.0, {@code } is just column family name.
- * Since v2.1, {@code } has column family ID(cfId) added to its end.
+ * Since v2.1, {@code } has column family ID(tableId) added to its end.
  *
  * SSTables from secondary indexes were put in the same directory as their parent.
  * Since v2.2, they have their own directory under the parent directory whose name is index name.
  * Upon startup, those secondary index files are moved to new directory when upgrading.
  *
- * For backward compatibility, Directories can use directory without cfId if exists.
+ * For backward compatibility, Directories can use directory without tableId if exists.
  *
  * In addition, more that one 'root' data directory can be specified so that
  * {@code } potentially represents multiple locations.
@@ -174,16 +173,16 @@ public class Directories
     }
 }
 
-    private final CFMetaData
[15/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/service/paxos/PaxosState.java -- diff --git a/src/java/org/apache/cassandra/service/paxos/PaxosState.java b/src/java/org/apache/cassandra/service/paxos/PaxosState.java index ee1ba6a..cf7f3d3 100644 --- a/src/java/org/apache/cassandra/service/paxos/PaxosState.java +++ b/src/java/org/apache/cassandra/service/paxos/PaxosState.java @@ -27,7 +27,7 @@ import com.google.common.base.Throwables; import com.google.common.util.concurrent.Striped; import com.google.common.util.concurrent.Uninterruptibles; -import org.apache.cassandra.config.CFMetaData; +import org.apache.cassandra.schema.TableMetadata; import org.apache.cassandra.config.DatabaseDescriptor; import org.apache.cassandra.db.*; import org.apache.cassandra.tracing.Tracing; @@ -41,7 +41,7 @@ public class PaxosState private final Commit accepted; private final Commit mostRecentCommit; -public PaxosState(DecoratedKey key, CFMetaData metadata) +public PaxosState(DecoratedKey key, TableMetadata metadata) { this(Commit.emptyCommit(key, metadata), Commit.emptyCommit(key, metadata), Commit.emptyCommit(key, metadata)); } @@ -92,7 +92,7 @@ public class PaxosState } finally { - Keyspace.open(toPrepare.update.metadata().ksName).getColumnFamilyStore(toPrepare.update.metadata().cfId).metric.casPrepare.addNano(System.nanoTime() - start); + Keyspace.open(toPrepare.update.metadata().keyspace).getColumnFamilyStore(toPrepare.update.metadata().id).metric.casPrepare.addNano(System.nanoTime() - start); } } @@ -127,7 +127,7 @@ public class PaxosState } finally { - Keyspace.open(proposal.update.metadata().ksName).getColumnFamilyStore(proposal.update.metadata().cfId).metric.casPropose.addNano(System.nanoTime() - start); + Keyspace.open(proposal.update.metadata().keyspace).getColumnFamilyStore(proposal.update.metadata().id).metric.casPropose.addNano(System.nanoTime() - start); } } @@ -143,7 +143,7 @@ public class PaxosState // erase the in-progress 
update. // The table may have been truncated since the proposal was initiated. In that case, we // don't want to perform the mutation and potentially resurrect truncated data -if (UUIDGen.unixTimestamp(proposal.ballot) >= SystemKeyspace.getTruncatedAt(proposal.update.metadata().cfId)) +if (UUIDGen.unixTimestamp(proposal.ballot) >= SystemKeyspace.getTruncatedAt(proposal.update.metadata().id)) { Tracing.trace("Committing proposal {}", proposal); Mutation mutation = proposal.makeMutation(); @@ -158,7 +158,7 @@ public class PaxosState } finally { - Keyspace.open(proposal.update.metadata().ksName).getColumnFamilyStore(proposal.update.metadata().cfId).metric.casCommit.addNano(System.nanoTime() - start); + Keyspace.open(proposal.update.metadata().keyspace).getColumnFamilyStore(proposal.update.metadata().id).metric.casCommit.addNano(System.nanoTime() - start); } } } http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java -- diff --git a/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java b/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java index 5915eab..381c498 100644 --- a/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java +++ b/src/java/org/apache/cassandra/service/paxos/PrepareCallback.java @@ -28,12 +28,13 @@ import java.util.concurrent.ConcurrentHashMap; import com.google.common.base.Predicate; import com.google.common.collect.Iterables; + +import org.apache.cassandra.schema.TableMetadata; import org.apache.cassandra.db.ConsistencyLevel; import org.apache.cassandra.db.DecoratedKey; import org.slf4j.Logger; import org.slf4j.LoggerFactory; -import org.apache.cassandra.config.CFMetaData; import org.apache.cassandra.db.SystemKeyspace; import org.apache.cassandra.net.MessageIn; import org.apache.cassandra.utils.UUIDGen; @@ -49,7 +50,7 @@ public class PrepareCallback extends AbstractPaxosCallback private final MapcommitsByReplica = new ConcurrentHashMap (); 
-    public PrepareCallback(DecoratedKey key, CFMetaData metadata, int targets, ConsistencyLevel consistency, long queryStartNanoTime)
+    public PrepareCallback(DecoratedKey key, TableMetadata metadata, int targets, ConsistencyLevel consistency, long queryStartNanoTime)
     {
         super(targets, consistency, queryStartNanoTime);
         // need to inject the right key in the empty commit so comparing with empty commits
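The PaxosState and PrepareCallback hunks are one instance of the mechanical rename this commit applies everywhere: `CFMetaData.ksName`/`cfName`/`cfId` become `TableMetadata.keyspace`/`name`/`id`. A minimal sketch of the new shape, using simplified stand-in classes (the real `TableMetadata` in `org.apache.cassandra.schema` carries columns, flags, and params, and uses `TableId` rather than a bare UUID):

```java
import java.util.UUID;

// Simplified stand-ins: the field names mirror the renames in this commit
// (ksName -> keyspace, cfName -> name, cfId -> id); everything else about
// the real classes is omitted.
final class MetadataRenameSketch
{
    static final class TableMetadata
    {
        final String keyspace; // was CFMetaData.ksName
        final String name;     // was CFMetaData.cfName
        final UUID id;         // was CFMetaData.cfId (the real code uses TableId)

        TableMetadata(String keyspace, String name, UUID id)
        {
            this.keyspace = keyspace;
            this.name = name;
            this.id = id;
        }
    }

    public static void main(String[] args)
    {
        TableMetadata metadata = new TableMetadata("ks1", "paxos", UUID.randomUUID());
        // Call sites change from metadata().ksName / metadata().cfId
        // to metadata().keyspace / metadata().id, as in the hunks above:
        System.out.println(metadata.keyspace + "." + metadata.name); // ks1.paxos
    }
}
```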
[27/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/db/SchemaCheckVerbHandler.java -- diff --git a/src/java/org/apache/cassandra/db/SchemaCheckVerbHandler.java b/src/java/org/apache/cassandra/db/SchemaCheckVerbHandler.java deleted file mode 100644 index 4270a24..000 --- a/src/java/org/apache/cassandra/db/SchemaCheckVerbHandler.java +++ /dev/null @@ -1,42 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ -package org.apache.cassandra.db; - -import java.util.UUID; - -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; - -import org.apache.cassandra.config.Schema; -import org.apache.cassandra.net.IVerbHandler; -import org.apache.cassandra.net.MessageIn; -import org.apache.cassandra.net.MessageOut; -import org.apache.cassandra.net.MessagingService; -import org.apache.cassandra.utils.UUIDSerializer; - -public class SchemaCheckVerbHandler implements IVerbHandler -{ -private final Logger logger = LoggerFactory.getLogger(SchemaCheckVerbHandler.class); - -public void doVerb(MessageIn message, int id) -{ -logger.trace("Received schema check request."); -MessageOut response = new MessageOut(MessagingService.Verb.INTERNAL_RESPONSE, Schema.instance.getVersion(), UUIDSerializer.serializer); -MessagingService.instance().sendReply(response, id, message.from); -} -} http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/db/SerializationHeader.java -- diff --git a/src/java/org/apache/cassandra/db/SerializationHeader.java b/src/java/org/apache/cassandra/db/SerializationHeader.java index 729d556..1f937f8 100644 --- a/src/java/org/apache/cassandra/db/SerializationHeader.java +++ b/src/java/org/apache/cassandra/db/SerializationHeader.java @@ -21,20 +21,20 @@ import java.io.IOException; import java.nio.ByteBuffer; import java.util.*; -import org.apache.cassandra.config.CFMetaData; -import org.apache.cassandra.config.ColumnDefinition; import org.apache.cassandra.db.filter.ColumnFilter; -import org.apache.cassandra.db.rows.*; import org.apache.cassandra.db.marshal.AbstractType; -import org.apache.cassandra.db.marshal.UTF8Type; import org.apache.cassandra.db.marshal.TypeParser; +import org.apache.cassandra.db.marshal.UTF8Type; +import org.apache.cassandra.db.rows.*; import org.apache.cassandra.io.sstable.format.SSTableReader; import org.apache.cassandra.io.sstable.format.Version; -import org.apache.cassandra.io.sstable.metadata.MetadataType; 
-import org.apache.cassandra.io.sstable.metadata.MetadataComponent; import org.apache.cassandra.io.sstable.metadata.IMetadataComponentSerializer; +import org.apache.cassandra.io.sstable.metadata.MetadataComponent; +import org.apache.cassandra.io.sstable.metadata.MetadataType; import org.apache.cassandra.io.util.DataInputPlus; import org.apache.cassandra.io.util.DataOutputPlus; +import org.apache.cassandra.schema.ColumnMetadata; +import org.apache.cassandra.schema.TableMetadata; import org.apache.cassandra.utils.ByteBufferUtil; public class SerializationHeader @@ -46,7 +46,7 @@ public class SerializationHeader private final AbstractType keyType; private final ListclusteringTypes; -private final PartitionColumns columns; +private final RegularAndStaticColumns columns; private final EncodingStats stats; private final Map typeMap; @@ -54,7 +54,7 @@ public class SerializationHeader private SerializationHeader(boolean isForSSTable, AbstractType keyType, List clusteringTypes, -PartitionColumns columns, +RegularAndStaticColumns columns, EncodingStats stats, Map typeMap) { @@ -66,12 +66,12 @@ public class SerializationHeader this.typeMap = typeMap; } -public static SerializationHeader makeWithoutStats(CFMetaData metadata)
[20/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/io/sstable/format/SSTableWriter.java -- diff --git a/src/java/org/apache/cassandra/io/sstable/format/SSTableWriter.java b/src/java/org/apache/cassandra/io/sstable/format/SSTableWriter.java index 874c679..26746ad 100644 --- a/src/java/org/apache/cassandra/io/sstable/format/SSTableWriter.java +++ b/src/java/org/apache/cassandra/io/sstable/format/SSTableWriter.java @@ -24,9 +24,7 @@ import com.google.common.annotations.VisibleForTesting; import com.google.common.collect.ImmutableList; import com.google.common.collect.Sets; -import org.apache.cassandra.config.CFMetaData; import org.apache.cassandra.config.DatabaseDescriptor; -import org.apache.cassandra.config.Schema; import org.apache.cassandra.db.RowIndexEntry; import org.apache.cassandra.db.SerializationHeader; import org.apache.cassandra.db.compaction.OperationType; @@ -43,6 +41,9 @@ import org.apache.cassandra.io.sstable.metadata.MetadataComponent; import org.apache.cassandra.io.sstable.metadata.MetadataType; import org.apache.cassandra.io.sstable.metadata.StatsMetadata; import org.apache.cassandra.io.util.FileUtils; +import org.apache.cassandra.schema.Schema; +import org.apache.cassandra.schema.TableMetadata; +import org.apache.cassandra.schema.TableMetadataRef; import org.apache.cassandra.utils.concurrent.Transactional; /** @@ -75,24 +76,24 @@ public abstract class SSTableWriter extends SSTable implements Transactional protected SSTableWriter(Descriptor descriptor, long keyCount, long repairedAt, -CFMetaData metadata, +TableMetadataRef metadata, MetadataCollector metadataCollector, SerializationHeader header, Collection observers) { -super(descriptor, components(metadata), metadata, DatabaseDescriptor.getDiskOptimizationStrategy()); +super(descriptor, components(metadata.get()), metadata, DatabaseDescriptor.getDiskOptimizationStrategy()); this.keyCount = keyCount; this.repairedAt = repairedAt; this.metadataCollector = 
metadataCollector; -this.header = header != null ? header : SerializationHeader.makeWithoutStats(metadata); //null header indicates streaming from pre-3.0 sstable -this.rowIndexEntrySerializer = descriptor.version.getSSTableFormat().getIndexSerializer(metadata, descriptor.version, header); +this.header = header; +this.rowIndexEntrySerializer = descriptor.version.getSSTableFormat().getIndexSerializer(metadata.get(), descriptor.version, header); this.observers = observers == null ? Collections.emptySet() : observers; } public static SSTableWriter create(Descriptor descriptor, Long keyCount, Long repairedAt, - CFMetaData metadata, + TableMetadataRef metadata, MetadataCollector metadataCollector, SerializationHeader header, Collection indexes, @@ -110,11 +111,11 @@ public abstract class SSTableWriter extends SSTable implements Transactional Collection indexes, LifecycleTransaction txn) { -CFMetaData metadata = Schema.instance.getCFMetaData(descriptor); +TableMetadataRef metadata = Schema.instance.getTableMetadataRef(descriptor); return create(metadata, descriptor, keyCount, repairedAt, sstableLevel, header, indexes, txn); } -public static SSTableWriter create(CFMetaData metadata, +public static SSTableWriter create(TableMetadataRef metadata, Descriptor descriptor, long keyCount, long repairedAt, @@ -123,7 +124,7 @@ public abstract class SSTableWriter extends SSTable implements Transactional Collection indexes, LifecycleTransaction txn) { -MetadataCollector collector = new MetadataCollector(metadata.comparator).sstableLevel(sstableLevel); +MetadataCollector collector = new MetadataCollector(metadata.get().comparator).sstableLevel(sstableLevel); return create(descriptor, keyCount, repairedAt, metadata, collector, header, indexes, txn); } @@ -138,7 +139,7 @@ public abstract class SSTableWriter extends SSTable implements Transactional return create(descriptor, keyCount, repairedAt, 0, header, indexes, txn); } -
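`SSTableWriter` now receives a `TableMetadataRef` rather than the metadata itself and calls `get()` wherever it needs the current snapshot. A sketch of that indirection, under the assumption (consistent with the commit title) that `TableMetadata` is an immutable snapshot and `TableMetadataRef` is a stable, swappable pointer to the latest one; the class bodies here are illustrative, not the real implementations:

```java
import java.util.concurrent.atomic.AtomicReference;

// Long-lived components (e.g. an open SSTableWriter) hold the ref, and
// dereference it with get() each time they need current schema. A schema
// change publishes a whole new immutable snapshot instead of mutating
// fields in place.
final class TableMetadataRefSketch
{
    static final class TableMetadata
    {
        final String keyspace, name;
        final int generation; // illustrative stand-in for "a newer snapshot"

        TableMetadata(String keyspace, String name, int generation)
        {
            this.keyspace = keyspace;
            this.name = name;
            this.generation = generation;
        }
    }

    static final class TableMetadataRef
    {
        private final AtomicReference<TableMetadata> current;

        TableMetadataRef(TableMetadata initial)
        {
            current = new AtomicReference<>(initial);
        }

        TableMetadata get()               { return current.get(); }    // latest snapshot
        void set(TableMetadata updated)   { current.set(updated); }    // schema change
    }

    public static void main(String[] args)
    {
        TableMetadataRef ref = new TableMetadataRef(new TableMetadata("ks", "t", 1));
        TableMetadata before = ref.get();
        ref.set(new TableMetadata("ks", "t", 2)); // e.g. an ALTER TABLE lands
        System.out.println(before.generation + " -> " + ref.get().generation); // 1 -> 2
    }
}
```

This is why the constructor body changes from `components(metadata)` to `components(metadata.get())`: the writer keeps the ref, and dereferences only at the point of use.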
[14/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/utils/NativeSSTableLoaderClient.java -- diff --git a/src/java/org/apache/cassandra/utils/NativeSSTableLoaderClient.java b/src/java/org/apache/cassandra/utils/NativeSSTableLoaderClient.java index 0b40fcb..ba702dd 100644 --- a/src/java/org/apache/cassandra/utils/NativeSSTableLoaderClient.java +++ b/src/java/org/apache/cassandra/utils/NativeSSTableLoaderClient.java @@ -22,23 +22,20 @@ import java.util.*; import com.datastax.driver.core.*; -import org.apache.cassandra.config.ColumnDefinition; -import org.apache.cassandra.config.ColumnDefinition.ClusteringOrder; -import org.apache.cassandra.config.CFMetaData; -import org.apache.cassandra.config.SchemaConstants; +import org.apache.cassandra.schema.*; +import org.apache.cassandra.schema.ColumnMetadata; +import org.apache.cassandra.schema.ColumnMetadata.ClusteringOrder; import org.apache.cassandra.cql3.ColumnIdentifier; import org.apache.cassandra.db.marshal.*; import org.apache.cassandra.dht.*; import org.apache.cassandra.dht.Token; import org.apache.cassandra.dht.Token.TokenFactory; import org.apache.cassandra.io.sstable.SSTableLoader; -import org.apache.cassandra.schema.CQLTypeParser; -import org.apache.cassandra.schema.SchemaKeyspace; -import org.apache.cassandra.schema.Types; +import org.apache.cassandra.schema.TableMetadata; public class NativeSSTableLoaderClient extends SSTableLoader.Client { -protected final Maptables; +protected final Map tables; private final Collection hosts; private final int port; private final AuthProvider authProvider; @@ -90,20 +87,20 @@ public class NativeSSTableLoaderClient extends SSTableLoader.Client Types types = fetchTypes(keyspace, session); tables.putAll(fetchTables(keyspace, session, partitioner, types)); -// We only need the CFMetaData for the views, so we only load that. +// We only need the TableMetadata for the views, so we only load that. 
tables.putAll(fetchViews(keyspace, session, partitioner, types)); } } -public CFMetaData getTableMetadata(String tableName) +public TableMetadataRef getTableMetadata(String tableName) { return tables.get(tableName); } @Override -public void setTableMetadata(CFMetaData cfm) +public void setTableMetadata(TableMetadataRef cfm) { -tables.put(cfm.cfName, cfm); +tables.put(cfm.name, cfm); } private static Types fetchTypes(String keyspace, Session session) @@ -130,9 +127,9 @@ public class NativeSSTableLoaderClient extends SSTableLoader.Client * Note: It is not safe for this class to use static methods from SchemaKeyspace (static final fields are ok) * as that triggers initialization of the class, which fails in client mode. */ -private static Map fetchTables(String keyspace, Session session, IPartitioner partitioner, Types types) +private static Map fetchTables(String keyspace, Session session, IPartitioner partitioner, Types types) { -Map tables = new HashMap<>(); +Map tables = new HashMap<>(); String query = String.format("SELECT * FROM %s.%s WHERE keyspace_name = ?", SchemaConstants.SCHEMA_KEYSPACE_NAME, SchemaKeyspace.TABLES); for (Row row : session.execute(query, keyspace)) @@ -144,12 +141,9 @@ public class NativeSSTableLoaderClient extends SSTableLoader.Client return tables; } -/* - * In the case where we are creating View CFMetaDatas, we - */ -private static Map fetchViews(String keyspace, Session session, IPartitioner partitioner, Types types) +private static Map fetchViews(String keyspace, Session session, IPartitioner partitioner, Types types) { -Map tables = new HashMap<>(); +Map tables = new HashMap<>(); String query = String.format("SELECT * FROM %s.%s WHERE keyspace_name = ?", SchemaConstants.SCHEMA_KEYSPACE_NAME, SchemaKeyspace.VIEWS); for (Row row : session.execute(query, keyspace)) @@ -161,43 +155,31 @@ public class NativeSSTableLoaderClient extends SSTableLoader.Client return tables; } -private static CFMetaData createTableMetadata(String keyspace, - 
-                                              Session session,
-                                              IPartitioner partitioner,
-                                              boolean isView,
-                                              Row row,
-                                              String name,
-
[33/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/cql3/restrictions/PartitionKeyRestrictions.java -- diff --git a/src/java/org/apache/cassandra/cql3/restrictions/PartitionKeyRestrictions.java b/src/java/org/apache/cassandra/cql3/restrictions/PartitionKeyRestrictions.java index 1ff45d0..b1edf94 100644 --- a/src/java/org/apache/cassandra/cql3/restrictions/PartitionKeyRestrictions.java +++ b/src/java/org/apache/cassandra/cql3/restrictions/PartitionKeyRestrictions.java @@ -20,7 +20,7 @@ package org.apache.cassandra.cql3.restrictions; import java.nio.ByteBuffer; import java.util.List; -import org.apache.cassandra.config.CFMetaData; +import org.apache.cassandra.schema.TableMetadata; import org.apache.cassandra.cql3.QueryOptions; import org.apache.cassandra.cql3.statements.Bound; @@ -53,16 +53,16 @@ interface PartitionKeyRestrictions extends Restrictions /** * checks if specified restrictions require filtering * - * @param cfm column family metadata + * @param table column family metadata * @return true if filtering is required, false otherwise */ -public boolean needFiltering(CFMetaData cfm); +public boolean needFiltering(TableMetadata table); /** * Checks if the partition key has unrestricted components. * - * @param cfm column family metadata + * @param table column family metadata * @return true if the partition key has unrestricted components, false otherwise. 
*/ -public boolean hasUnrestrictedPartitionKeyComponents(CFMetaData cfm); +public boolean hasUnrestrictedPartitionKeyComponents(TableMetadata table); } http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/cql3/restrictions/PartitionKeySingleRestrictionSet.java -- diff --git a/src/java/org/apache/cassandra/cql3/restrictions/PartitionKeySingleRestrictionSet.java b/src/java/org/apache/cassandra/cql3/restrictions/PartitionKeySingleRestrictionSet.java index b34ff54..5113667 100644 --- a/src/java/org/apache/cassandra/cql3/restrictions/PartitionKeySingleRestrictionSet.java +++ b/src/java/org/apache/cassandra/cql3/restrictions/PartitionKeySingleRestrictionSet.java @@ -20,7 +20,7 @@ package org.apache.cassandra.cql3.restrictions; import java.nio.ByteBuffer; import java.util.*; -import org.apache.cassandra.config.CFMetaData; +import org.apache.cassandra.schema.TableMetadata; import org.apache.cassandra.cql3.QueryOptions; import org.apache.cassandra.cql3.statements.Bound; import org.apache.cassandra.db.ClusteringComparator; @@ -59,7 +59,7 @@ final class PartitionKeySingleRestrictionSet extends RestrictionSetWrapper imple { List l = new ArrayList<>(clusterings.size()); for (ClusteringPrefix clustering : clusterings) -l.add(CFMetaData.serializePartitionKey(clustering)); +l.add(clustering.serializeAsPartitionKey()); return l; } @@ -131,18 +131,18 @@ final class PartitionKeySingleRestrictionSet extends RestrictionSetWrapper imple } @Override -public boolean needFiltering(CFMetaData cfm) +public boolean needFiltering(TableMetadata table) { if (isEmpty()) return false; // slice or has unrestricted key component -return hasUnrestrictedPartitionKeyComponents(cfm) || hasSlice(); +return hasUnrestrictedPartitionKeyComponents(table) || hasSlice(); } @Override -public boolean hasUnrestrictedPartitionKeyComponents(CFMetaData cfm) +public boolean hasUnrestrictedPartitionKeyComponents(TableMetadata table) { -return size() < 
cfm.partitionKeyColumns().size(); +return size() < table.partitionKeyColumns().size(); } @Override http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/cql3/restrictions/Restriction.java -- diff --git a/src/java/org/apache/cassandra/cql3/restrictions/Restriction.java b/src/java/org/apache/cassandra/cql3/restrictions/Restriction.java index fc7f5bc..daace46 100644 --- a/src/java/org/apache/cassandra/cql3/restrictions/Restriction.java +++ b/src/java/org/apache/cassandra/cql3/restrictions/Restriction.java @@ -19,7 +19,7 @@ package org.apache.cassandra.cql3.restrictions; import java.util.List; -import org.apache.cassandra.config.ColumnDefinition; +import org.apache.cassandra.schema.ColumnMetadata; import org.apache.cassandra.cql3.QueryOptions; import org.apache.cassandra.cql3.functions.Function; import org.apache.cassandra.db.filter.RowFilter; @@ -39,19 +39,19 @@ public interface Restriction * Returns the definition of the first column. * @return the definition of the first column. */ -public
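The renamed `needFiltering`/`hasUnrestrictedPartitionKeyComponents` pair keeps its logic unchanged: filtering is required when the restriction set is non-empty but either leaves some partition-key component unrestricted (`size() < table.partitionKeyColumns().size()`) or contains a slice. A simplified, self-contained sketch of that check (method names taken from the diff, everything else illustrative):

```java
import java.util.List;

// With a composite partition key, a query that restricts fewer components
// than the key has (or uses a slice) cannot be served by a direct partition
// lookup, so it requires filtering.
final class PartitionKeyFilteringSketch
{
    static boolean hasUnrestrictedPartitionKeyComponents(int restrictedCount, List<String> partitionKeyColumns)
    {
        return restrictedCount < partitionKeyColumns.size();
    }

    static boolean needFiltering(int restrictedCount, boolean hasSlice, List<String> partitionKeyColumns)
    {
        if (restrictedCount == 0)
            return false; // empty restriction set: a full scan, not "filtering"
        return hasUnrestrictedPartitionKeyComponents(restrictedCount, partitionKeyColumns) || hasSlice;
    }

    public static void main(String[] args)
    {
        List<String> pk = List.of("bucket", "day"); // composite partition key
        System.out.println(needFiltering(1, false, pk)); // true: 'day' unrestricted
        System.out.println(needFiltering(2, false, pk)); // false: fully restricted
    }
}
```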
[24/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/db/filter/RowFilter.java -- diff --git a/src/java/org/apache/cassandra/db/filter/RowFilter.java b/src/java/org/apache/cassandra/db/filter/RowFilter.java index 14903ba..bf65e96 100644 --- a/src/java/org/apache/cassandra/db/filter/RowFilter.java +++ b/src/java/org/apache/cassandra/db/filter/RowFilter.java @@ -25,25 +25,22 @@ import java.util.concurrent.ConcurrentMap; import java.util.concurrent.atomic.AtomicInteger; import com.google.common.base.Objects; -import com.google.common.collect.Iterables; import org.slf4j.Logger; import org.slf4j.LoggerFactory; -import org.apache.cassandra.config.CFMetaData; -import org.apache.cassandra.config.ColumnDefinition; import org.apache.cassandra.cql3.Operator; import org.apache.cassandra.db.*; import org.apache.cassandra.db.context.*; import org.apache.cassandra.db.marshal.*; -import org.apache.cassandra.db.partitions.ImmutableBTreePartition; import org.apache.cassandra.db.partitions.UnfilteredPartitionIterator; import org.apache.cassandra.db.rows.*; import org.apache.cassandra.db.transform.Transformation; import org.apache.cassandra.exceptions.InvalidRequestException; import org.apache.cassandra.io.util.DataInputPlus; import org.apache.cassandra.io.util.DataOutputPlus; -import org.apache.cassandra.net.MessagingService; +import org.apache.cassandra.schema.ColumnMetadata; import org.apache.cassandra.schema.IndexMetadata; +import org.apache.cassandra.schema.TableMetadata; import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.utils.FBUtilities; @@ -83,21 +80,21 @@ public abstract class RowFilter implements Iterable return new CQLFilter(new ArrayList<>(capacity)); } -public SimpleExpression add(ColumnDefinition def, Operator op, ByteBuffer value) +public SimpleExpression add(ColumnMetadata def, Operator op, ByteBuffer value) { SimpleExpression expression = new SimpleExpression(def, op, value); add(expression); return 
expression; } -public void addMapEquality(ColumnDefinition def, ByteBuffer key, Operator op, ByteBuffer value) +public void addMapEquality(ColumnMetadata def, ByteBuffer key, Operator op, ByteBuffer value) { add(new MapEqualityExpression(def, key, op, value)); } -public void addCustomIndexExpression(CFMetaData cfm, IndexMetadata targetIndex, ByteBuffer value) +public void addCustomIndexExpression(TableMetadata metadata, IndexMetadata targetIndex, ByteBuffer value) { -add(new CustomExpression(cfm, targetIndex, value)); +add(new CustomExpression(metadata, targetIndex, value)); } private void add(Expression expression) @@ -135,7 +132,7 @@ public abstract class RowFilter implements Iterable * @param nowInSec the current time in seconds (to know what is live and what isn't). * @return {@code true} if {@code row} in partition {@code partitionKey} satisfies this row filter. */ -public boolean isSatisfiedBy(CFMetaData metadata, DecoratedKey partitionKey, Row row, int nowInSec) +public boolean isSatisfiedBy(TableMetadata metadata, DecoratedKey partitionKey, Row row, int nowInSec) { // We purge all tombstones as the expressions isSatisfiedBy methods expects it Row purged = row.purge(DeletionPurger.PURGE_ALL, nowInSec); @@ -249,7 +246,7 @@ public abstract class RowFilter implements Iterable if (expressions.isEmpty()) return iter; -final CFMetaData metadata = iter.metadata(); +final TableMetadata metadata = iter.metadata(); List partitionLevelExpressions = new ArrayList<>(); List rowLevelExpressions = new ArrayList<>(); @@ -323,11 +320,11 @@ public abstract class RowFilter implements Iterable protected enum Kind { SIMPLE, MAP_EQUALITY, UNUSED1, CUSTOM, USER } protected abstract Kind kind(); -protected final ColumnDefinition column; +protected final ColumnMetadata column; protected final Operator operator; protected final ByteBuffer value; -protected Expression(ColumnDefinition column, Operator operator, ByteBuffer value) +protected Expression(ColumnMetadata column, Operator 
operator, ByteBuffer value) { this.column = column; this.operator = operator; @@ -344,7 +341,7 @@ public abstract class RowFilter implements Iterable return kind() == Kind.USER; } -public ColumnDefinition column() +public ColumnMetadata column() { return column; } @@ -401,19 +398,21 @@ public abstract class RowFilter implements Iterable /** * Returns whether the provided row satisfied this expression or not. * +
[36/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/cache/OHCProvider.java -- diff --git a/src/java/org/apache/cassandra/cache/OHCProvider.java b/src/java/org/apache/cassandra/cache/OHCProvider.java index 6f75c74..a465bd9 100644 --- a/src/java/org/apache/cassandra/cache/OHCProvider.java +++ b/src/java/org/apache/cassandra/cache/OHCProvider.java @@ -20,6 +20,7 @@ package org.apache.cassandra.cache; import java.io.IOException; import java.nio.ByteBuffer; import java.util.Iterator; +import java.util.UUID; import org.apache.cassandra.config.DatabaseDescriptor; import org.apache.cassandra.db.TypeSizes; @@ -28,7 +29,7 @@ import org.apache.cassandra.io.util.DataInputBuffer; import org.apache.cassandra.io.util.DataOutputBuffer; import org.apache.cassandra.io.util.DataOutputBufferFixed; import org.apache.cassandra.io.util.RebufferingInputStream; -import org.apache.cassandra.utils.Pair; +import org.apache.cassandra.schema.TableId; import org.caffinitas.ohc.OHCache; import org.caffinitas.ohc.OHCacheBuilder; @@ -129,8 +130,8 @@ public class OHCProvider implements CacheProviderDataOutputBuffer dataOutput = new DataOutputBufferFixed(buf); try { -dataOutput.writeUTF(rowCacheKey.ksAndCFName.left); -dataOutput.writeUTF(rowCacheKey.ksAndCFName.right); +rowCacheKey.tableId.serialize(dataOutput); +dataOutput.writeUTF(rowCacheKey.indexName != null ? 
rowCacheKey.indexName : ""); } catch (IOException e) { @@ -144,12 +145,14 @@ public class OHCProvider implements CacheProvider { @SuppressWarnings("resource") DataInputBuffer dataInput = new DataInputBuffer(buf, false); -String ksName = null; -String cfName = null; +TableId tableId = null; +String indexName = null; try { -ksName = dataInput.readUTF(); -cfName = dataInput.readUTF(); +tableId = TableId.deserialize(dataInput); +indexName = dataInput.readUTF(); +if (indexName.isEmpty()) +indexName = null; } catch (IOException e) { @@ -157,15 +160,15 @@ public class OHCProvider implements CacheProvider } byte[] key = new byte[buf.getInt()]; buf.get(key); -return new RowCacheKey(Pair.create(ksName, cfName), key); +return new RowCacheKey(tableId, indexName, key); } public int serializedSize(RowCacheKey rowCacheKey) { -return TypeSizes.sizeof(rowCacheKey.ksAndCFName.left) -+ TypeSizes.sizeof(rowCacheKey.ksAndCFName.right) -+ 4 -+ rowCacheKey.key.length; +return rowCacheKey.tableId.serializedSize() + + TypeSizes.sizeof(rowCacheKey.indexName != null ? 
rowCacheKey.indexName : "") + + 4 + + rowCacheKey.key.length; } } http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/cache/RowCacheKey.java -- diff --git a/src/java/org/apache/cassandra/cache/RowCacheKey.java b/src/java/org/apache/cassandra/cache/RowCacheKey.java index e02db42..bbf289a 100644 --- a/src/java/org/apache/cassandra/cache/RowCacheKey.java +++ b/src/java/org/apache/cassandra/cache/RowCacheKey.java @@ -19,32 +19,41 @@ package org.apache.cassandra.cache; import java.nio.ByteBuffer; import java.util.Arrays; +import java.util.Objects; + +import com.google.common.annotations.VisibleForTesting; import org.apache.cassandra.db.DecoratedKey; +import org.apache.cassandra.schema.Schema; +import org.apache.cassandra.schema.TableId; +import org.apache.cassandra.schema.TableMetadata; +import org.apache.cassandra.schema.TableMetadataRef; import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.utils.ObjectSizes; -import org.apache.cassandra.utils.Pair; public final class RowCacheKey extends CacheKey { public final byte[] key; -private static final long EMPTY_SIZE = ObjectSizes.measure(new RowCacheKey(null, ByteBufferUtil.EMPTY_BYTE_BUFFER)); +private static final long EMPTY_SIZE = ObjectSizes.measure(new RowCacheKey(null, null, new byte[0])); -public RowCacheKey(Pair ksAndCFName, byte[] key) +public RowCacheKey(TableId tableId, String indexName, byte[] key) { -super(ksAndCFName); +super(tableId, indexName); this.key = key; } -public RowCacheKey(Pair ksAndCFName,
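The `OHCProvider` serializer now writes a fixed-width `TableId` plus an index name (with `null` round-tripped as the empty string) instead of two UTF strings for keyspace and table name. A sketch of that encoding, using a plain `UUID` as a stand-in for `TableId`; the byte counts are those of `DataOutputStream`, not necessarily of the real serializer:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.UUID;

// A fixed-width table id also means cached rows stay addressable across a
// keyspace/table rename, which name-based keys could not guarantee.
final class RowCacheKeySketch
{
    static byte[] serialize(UUID tableId, String indexName)
    {
        try
        {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeLong(tableId.getMostSignificantBits());  // 16 fixed bytes of id
            out.writeLong(tableId.getLeastSignificantBits());
            out.writeUTF(indexName != null ? indexName : ""); // null round-trips as ""
            return bytes.toByteArray();
        }
        catch (IOException e)
        {
            throw new RuntimeException(e); // in-memory streams never actually throw
        }
    }

    public static void main(String[] args)
    {
        // 16 bytes of id + writeUTF's 2-byte length prefix for the empty string
        System.out.println(serialize(UUID.randomUUID(), null).length); // 18
    }
}
```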
[30/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java index 43c22a3..228af33 100644 --- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java @@ -25,10 +25,11 @@ import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.cassandra.auth.Permission; -import org.apache.cassandra.config.CFMetaData; -import org.apache.cassandra.config.ColumnDefinition; -import org.apache.cassandra.config.ColumnDefinition.Raw; -import org.apache.cassandra.config.ViewDefinition; +import org.apache.cassandra.schema.ColumnMetadata; +import org.apache.cassandra.schema.Schema; +import org.apache.cassandra.schema.TableMetadata; +import org.apache.cassandra.schema.ColumnMetadata.Raw; +import org.apache.cassandra.schema.ViewMetadata; import org.apache.cassandra.cql3.*; import org.apache.cassandra.cql3.conditions.ColumnCondition; import org.apache.cassandra.cql3.conditions.ColumnConditions; @@ -71,24 +72,24 @@ public abstract class ModificationStatement implements CQLStatement protected final StatementType type; private final int boundTerms; -public final CFMetaData cfm; +public final TableMetadata metadata; private final Attributes attrs; private final StatementRestrictions restrictions; private final Operations operations; -private final PartitionColumns updatedColumns; +private final RegularAndStaticColumns updatedColumns; private final Conditions conditions; -private final PartitionColumns conditionColumns; +private final RegularAndStaticColumns conditionColumns; -private final PartitionColumns requiresRead; +private final RegularAndStaticColumns requiresRead; public ModificationStatement(StatementType type, int boundTerms, - CFMetaData 
cfm, + TableMetadata metadata, Operations operations, StatementRestrictions restrictions, Conditions conditions, @@ -96,7 +97,7 @@ public abstract class ModificationStatement implements CQLStatement { this.type = type; this.boundTerms = boundTerms; -this.cfm = cfm; +this.metadata = metadata; this.restrictions = restrictions; this.operations = operations; this.conditions = conditions; @@ -104,17 +105,17 @@ public abstract class ModificationStatement implements CQLStatement if (!conditions.isEmpty()) { -checkFalse(cfm.isCounter(), "Conditional updates are not supported on counter tables"); +checkFalse(metadata.isCounter(), "Conditional updates are not supported on counter tables"); checkFalse(attrs.isTimestampSet(), "Cannot provide custom timestamp for conditional updates"); } -PartitionColumns.Builder conditionColumnsBuilder = PartitionColumns.builder(); -Iterable columns = conditions.getColumns(); +RegularAndStaticColumns.Builder conditionColumnsBuilder = RegularAndStaticColumns.builder(); +Iterable columns = conditions.getColumns(); if (columns != null) conditionColumnsBuilder.addAll(columns); -PartitionColumns.Builder updatedColumnsBuilder = PartitionColumns.builder(); -PartitionColumns.Builder requiresReadBuilder = PartitionColumns.builder(); +RegularAndStaticColumns.Builder updatedColumnsBuilder = RegularAndStaticColumns.builder(); +RegularAndStaticColumns.Builder requiresReadBuilder = RegularAndStaticColumns.builder(); for (Operation operation : operations) { updatedColumnsBuilder.add(operation.column); @@ -127,13 +128,13 @@ public abstract class ModificationStatement implements CQLStatement } } -PartitionColumns modifiedColumns = updatedColumnsBuilder.build(); +RegularAndStaticColumns modifiedColumns = updatedColumnsBuilder.build(); // Compact tables have not row marker. So if we don't actually update any particular column, // this means that we're only updating the PK, which we allow if only those were declared in // the definition. 
In that case however, we do want to write the compactValueColumn (since again // we can't use a "row marker") so add it automatically. -if (cfm.isCompactTable() && modifiedColumns.isEmpty() && updatesRegularRows()) -modifiedColumns = cfm.partitionColumns(); +if (metadata.isCompactTable()
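The ModificationStatement hunk above renames PartitionColumns to RegularAndStaticColumns but keeps the same pattern: a mutable builder collects condition columns, updated columns, and read-required columns, then freezes each into an immutable set. A minimal standalone sketch of that immutable-set-via-builder pattern (the `Columns` class and its method names are hypothetical stand-ins, not the real Cassandra API):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical stand-in for RegularAndStaticColumns: an immutable column set
// produced by a mutable Builder, mirroring the pattern used in the patch.
final class Columns {
    private final List<String> names;

    private Columns(List<String> names) {
        this.names = Collections.unmodifiableList(names);
    }

    static Builder builder() { return new Builder(); }

    boolean isEmpty() { return names.isEmpty(); }
    List<String> names() { return names; }

    static final class Builder {
        private final List<String> names = new ArrayList<>();

        Builder add(String column) { names.add(column); return this; }
        Builder addAll(Iterable<String> columns) {
            for (String c : columns) names.add(c);
            return this;
        }
        // Copy on build so later builder mutations can't leak into the set.
        Columns build() { return new Columns(new ArrayList<>(names)); }
    }
}

public class BuilderDemo {
    public static void main(String[] args) {
        // Collect updated columns and read-required columns separately,
        // as ModificationStatement does while walking its operations.
        Columns.Builder updated = Columns.builder();
        Columns.Builder requiresRead = Columns.builder();
        updated.add("v1").add("v2");
        requiresRead.add("v2"); // e.g. an operation that must read before write
        System.out.println(updated.build().names());      // [v1, v2]
        System.out.println(requiresRead.build().names()); // [v2]
    }
}
```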
[32/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/cql3/selection/Selection.java -- diff --git a/src/java/org/apache/cassandra/cql3/selection/Selection.java b/src/java/org/apache/cassandra/cql3/selection/Selection.java index 401442f..078438b 100644 --- a/src/java/org/apache/cassandra/cql3/selection/Selection.java +++ b/src/java/org/apache/cassandra/cql3/selection/Selection.java @@ -25,8 +25,6 @@ import com.google.common.base.Predicate; import com.google.common.collect.Iterables; import com.google.common.collect.Iterators; -import org.apache.cassandra.config.CFMetaData; -import org.apache.cassandra.config.ColumnDefinition; import org.apache.cassandra.cql3.*; import org.apache.cassandra.cql3.functions.Function; import org.apache.cassandra.db.Clustering; @@ -37,6 +35,8 @@ import org.apache.cassandra.db.context.CounterContext; import org.apache.cassandra.db.marshal.UTF8Type; import org.apache.cassandra.db.rows.Cell; import org.apache.cassandra.exceptions.InvalidRequestException; +import org.apache.cassandra.schema.ColumnMetadata; +import org.apache.cassandra.schema.TableMetadata; import org.apache.cassandra.transport.ProtocolVersion; import org.apache.cassandra.utils.ByteBufferUtil; @@ -45,28 +45,22 @@ public abstract class Selection /** * A predicate that returns true for static columns. 
*/ -private static final Predicate STATIC_COLUMN_FILTER = new Predicate() -{ -public boolean apply(ColumnDefinition def) -{ -return def.isStatic(); -} -}; +private static final Predicate STATIC_COLUMN_FILTER = (column) -> column.isStatic(); -private final CFMetaData cfm; -private final List columns; +private final TableMetadata table; +private final List columns; private final SelectionColumnMapping columnMapping; private final ResultSet.ResultMetadata metadata; private final boolean collectTimestamps; private final boolean collectTTLs; -protected Selection(CFMetaData cfm, -List columns, +protected Selection(TableMetadata table, +List columns, SelectionColumnMapping columnMapping, boolean collectTimestamps, boolean collectTTLs) { -this.cfm = cfm; +this.table = table; this.columns = columns; this.columnMapping = columnMapping; this.metadata = new ResultSet.ResultMetadata(columnMapping.getColumnSpecifications()); @@ -86,7 +80,7 @@ public abstract class Selection */ public boolean containsStaticColumns() { -if (!cfm.hasStaticColumns()) +if (!table.hasStaticColumns()) return false; if (isWildcard()) @@ -107,7 +101,7 @@ public abstract class Selection if (isWildcard()) return false; -for (ColumnDefinition def : getColumns()) +for (ColumnMetadata def : getColumns()) { if (!def.isPartitionKey() && !def.isStatic()) return false; @@ -126,19 +120,19 @@ public abstract class Selection return new ResultSet.ResultMetadata(Arrays.asList(jsonSpec)); } -public static Selection wildcard(CFMetaData cfm) +public static Selection wildcard(TableMetadata table) { -List all = new ArrayList<>(cfm.allColumns().size()); -Iterators.addAll(all, cfm.allColumnsInSelectOrder()); -return new SimpleSelection(cfm, all, true); +List all = new ArrayList<>(table.columns().size()); +Iterators.addAll(all, table.allColumnsInSelectOrder()); +return new SimpleSelection(table, all, true); } -public static Selection forColumns(CFMetaData cfm, List columns) +public static Selection forColumns(TableMetadata 
table, List columns) { -return new SimpleSelection(cfm, columns, false); +return new SimpleSelection(table, columns, false); } -public int addColumnForOrdering(ColumnDefinition c) +public int addColumnForOrdering(ColumnMetadata c) { columns.add(c); metadata.addNonSerializedColumn(c); @@ -159,17 +153,17 @@ public abstract class Selection return false; } -public static Selection fromSelectors(CFMetaData cfm, List rawSelectors, VariableSpecifications boundNames, boolean hasGroupBy) +public static Selection fromSelectors(TableMetadata table, List rawSelectors, VariableSpecifications boundNames, boolean hasGroupBy) { -List defs = new ArrayList<>(); +List defs = new ArrayList<>(); SelectorFactories factories = - SelectorFactories.createFactoriesAndCollectColumnDefinitions(RawSelector.toSelectables(rawSelectors, cfm), null, cfm, defs, boundNames); -SelectionColumnMapping mapping =
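The Selection hunk above also replaces an anonymous-class STATIC_COLUMN_FILTER with a lambda. A standalone sketch of that refactor (using `java.util.function.Predicate` with `test`; the real code uses Guava's `Predicate` with `apply`, and the `Column` class here is a hypothetical stand-in for ColumnMetadata):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class LambdaFilterDemo {
    // Hypothetical stand-in for ColumnMetadata with just the flag we filter on.
    static final class Column {
        final String name;
        final boolean isStatic;
        Column(String name, boolean isStatic) { this.name = name; this.isStatic = isStatic; }
        boolean isStatic() { return isStatic; }
    }

    // Pre-Java-8 style, analogous to the code the patch removes:
    static final Predicate<Column> OLD_STYLE = new Predicate<Column>() {
        public boolean test(Column c) { return c.isStatic(); }
    };

    // The equivalent lambda the patch switches to:
    static final Predicate<Column> STATIC_COLUMN_FILTER = column -> column.isStatic();

    static List<String> staticNames(List<Column> columns) {
        return columns.stream()
                      .filter(STATIC_COLUMN_FILTER)
                      .map(c -> c.name)
                      .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Column> cols = Arrays.asList(new Column("k", false),
                                          new Column("s", true),
                                          new Column("v", false));
        System.out.println(staticNames(cols)); // [s]
    }
}
```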
[21/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/index/sasi/conf/ColumnIndex.java -- diff --git a/src/java/org/apache/cassandra/index/sasi/conf/ColumnIndex.java b/src/java/org/apache/cassandra/index/sasi/conf/ColumnIndex.java index 0958113..269fc95 100644 --- a/src/java/org/apache/cassandra/index/sasi/conf/ColumnIndex.java +++ b/src/java/org/apache/cassandra/index/sasi/conf/ColumnIndex.java @@ -28,7 +28,7 @@ import java.util.concurrent.atomic.AtomicReference; import com.google.common.annotations.VisibleForTesting; -import org.apache.cassandra.config.ColumnDefinition; +import org.apache.cassandra.schema.ColumnMetadata; import org.apache.cassandra.cql3.Operator; import org.apache.cassandra.db.DecoratedKey; import org.apache.cassandra.db.Memtable; @@ -57,7 +57,7 @@ public class ColumnIndex private final AbstractType keyValidator; -private final ColumnDefinition column; +private final ColumnMetadata column; private final Optional config; private final AtomicReference memtable; @@ -70,7 +70,7 @@ public class ColumnIndex private final boolean isTokenized; -public ColumnIndex(AbstractType keyValidator, ColumnDefinition column, IndexMetadata metadata) +public ColumnIndex(AbstractType keyValidator, ColumnMetadata column, IndexMetadata metadata) { this.keyValidator = keyValidator; this.column = column; @@ -147,7 +147,7 @@ public class ColumnIndex tracker.update(oldSSTables, newSSTables); } -public ColumnDefinition getDefinition() +public ColumnMetadata getDefinition() { return column; } @@ -229,7 +229,7 @@ public class ColumnIndex } -public static ByteBuffer getValueOf(ColumnDefinition column, Row row, int nowInSecs) +public static ByteBuffer getValueOf(ColumnMetadata column, Row row, int nowInSecs) { if (row == null) return null; http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/index/sasi/conf/IndexMode.java -- diff --git a/src/java/org/apache/cassandra/index/sasi/conf/IndexMode.java 
b/src/java/org/apache/cassandra/index/sasi/conf/IndexMode.java index c66dd02..d319636 100644 --- a/src/java/org/apache/cassandra/index/sasi/conf/IndexMode.java +++ b/src/java/org/apache/cassandra/index/sasi/conf/IndexMode.java @@ -22,7 +22,7 @@ import java.util.Map; import java.util.Optional; import java.util.Set; -import org.apache.cassandra.config.ColumnDefinition; +import org.apache.cassandra.schema.ColumnMetadata; import org.apache.cassandra.index.sasi.analyzer.AbstractAnalyzer; import org.apache.cassandra.index.sasi.analyzer.NoOpAnalyzer; import org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer; @@ -110,12 +110,12 @@ public class IndexMode } } -public static IndexMode getMode(ColumnDefinition column, Optional<IndexMetadata> config) throws ConfigurationException +public static IndexMode getMode(ColumnMetadata column, Optional<IndexMetadata> config) throws ConfigurationException { return getMode(column, config.isPresent() ? config.get().options : null); } -public static IndexMode getMode(ColumnDefinition column, Map<String, String> indexOptions) throws ConfigurationException +public static IndexMode getMode(ColumnMetadata column, Map<String, String> indexOptions) throws ConfigurationException { if (indexOptions == null || indexOptions.isEmpty()) return IndexMode.NOT_INDEXED; http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/index/sasi/disk/PerSSTableIndexWriter.java -- diff --git a/src/java/org/apache/cassandra/index/sasi/disk/PerSSTableIndexWriter.java b/src/java/org/apache/cassandra/index/sasi/disk/PerSSTableIndexWriter.java index 9fa4e87..20f6292 100644 --- a/src/java/org/apache/cassandra/index/sasi/disk/PerSSTableIndexWriter.java +++ b/src/java/org/apache/cassandra/index/sasi/disk/PerSSTableIndexWriter.java @@ -27,7 +27,7 @@ import java.util.concurrent.*; import org.apache.cassandra.concurrent.JMXEnabledThreadPoolExecutor; import org.apache.cassandra.concurrent.NamedThreadFactory; -import org.apache.cassandra.config.ColumnDefinition; +import
org.apache.cassandra.schema.ColumnMetadata; import org.apache.cassandra.db.DecoratedKey; import org.apache.cassandra.db.compaction.OperationType; import org.apache.cassandra.db.rows.Row; @@ -79,10 +79,10 @@ public class PerSSTableIndexWriter implements SSTableFlushObserver private final OperationType source; private final AbstractType keyValidator; -private final Map<ColumnDefinition, ColumnIndex> supportedIndexes; +private final Map<ColumnMetadata, ColumnIndex> supportedIndexes;
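The IndexMode hunk above keeps two getMode overloads: one taking an Optional-wrapped config that unwraps to a plain options map, delegating to the other. A standalone sketch of that overload-delegation pattern (the `IndexMetadata` stand-in and the string return values here are hypothetical; the real method returns an IndexMode and takes a ColumnMetadata as well):

```java
import java.util.Collections;
import java.util.Map;
import java.util.Optional;

public class OptionalOverloadDemo {
    // Hypothetical stand-in for IndexMetadata: just the options map we need.
    static final class IndexMetadata {
        final Map<String, String> options;
        IndexMetadata(Map<String, String> options) { this.options = options; }
    }

    // Optional-taking overload delegates to the Map-taking one, unwrapping
    // with the same ternary the patch uses.
    static String getMode(Optional<IndexMetadata> config) {
        return getMode(config.isPresent() ? config.get().options : null);
    }

    static String getMode(Map<String, String> indexOptions) {
        if (indexOptions == null || indexOptions.isEmpty())
            return "NOT_INDEXED";
        return indexOptions.getOrDefault("mode", "PREFIX");
    }

    public static void main(String[] args) {
        System.out.println(getMode(Optional.empty())); // NOT_INDEXED
        System.out.println(getMode(Optional.of(
            new IndexMetadata(Collections.singletonMap("mode", "CONTAINS"))))); // CONTAINS
    }
}
```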
[16/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/service/CacheService.java -- diff --git a/src/java/org/apache/cassandra/service/CacheService.java b/src/java/org/apache/cassandra/service/CacheService.java index e64ec75..4cb7470 100644 --- a/src/java/org/apache/cassandra/service/CacheService.java +++ b/src/java/org/apache/cassandra/service/CacheService.java @@ -32,23 +32,25 @@ import javax.management.ObjectName; import com.google.common.util.concurrent.Futures; -import org.apache.cassandra.db.lifecycle.SSTableSet; -import org.apache.cassandra.io.sstable.format.SSTableReader; import org.slf4j.Logger; import org.slf4j.LoggerFactory; + import org.apache.cassandra.cache.*; import org.apache.cassandra.cache.AutoSavingCache.CacheSerializer; import org.apache.cassandra.concurrent.Stage; import org.apache.cassandra.concurrent.StageManager; import org.apache.cassandra.config.DatabaseDescriptor; import org.apache.cassandra.db.*; -import org.apache.cassandra.db.rows.*; +import org.apache.cassandra.db.context.CounterContext; import org.apache.cassandra.db.filter.*; +import org.apache.cassandra.db.lifecycle.SSTableSet; import org.apache.cassandra.db.partitions.CachedBTreePartition; import org.apache.cassandra.db.partitions.CachedPartition; -import org.apache.cassandra.db.context.CounterContext; +import org.apache.cassandra.db.rows.*; +import org.apache.cassandra.io.sstable.format.SSTableReader; import org.apache.cassandra.io.util.DataInputPlus; import org.apache.cassandra.io.util.DataOutputPlus; +import org.apache.cassandra.schema.TableMetadata; import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.utils.FBUtilities; import org.apache.cassandra.utils.Pair; @@ -265,13 +267,13 @@ public class CacheService implements CacheServiceMBean keyCache.clear(); } -public void invalidateKeyCacheForCf(PairksAndCFName) +public void invalidateKeyCacheForCf(TableMetadata tableMetadata) { Iterator keyCacheIterator = 
keyCache.keyIterator(); while (keyCacheIterator.hasNext()) { KeyCacheKey key = keyCacheIterator.next(); -if (key.ksAndCFName.equals(ksAndCFName)) +if (key.sameTable(tableMetadata)) keyCacheIterator.remove(); } } @@ -281,24 +283,24 @@ public class CacheService implements CacheServiceMBean rowCache.clear(); } -public void invalidateRowCacheForCf(Pair ksAndCFName) +public void invalidateRowCacheForCf(TableMetadata tableMetadata) { Iterator rowCacheIterator = rowCache.keyIterator(); while (rowCacheIterator.hasNext()) { -RowCacheKey rowCacheKey = rowCacheIterator.next(); -if (rowCacheKey.ksAndCFName.equals(ksAndCFName)) +RowCacheKey key = rowCacheIterator.next(); +if (key.sameTable(tableMetadata)) rowCacheIterator.remove(); } } -public void invalidateCounterCacheForCf(Pair ksAndCFName) +public void invalidateCounterCacheForCf(TableMetadata tableMetadata) { Iterator counterCacheIterator = counterCache.keyIterator(); while (counterCacheIterator.hasNext()) { -CounterCacheKey counterCacheKey = counterCacheIterator.next(); -if (counterCacheKey.ksAndCFName.equals(ksAndCFName)) +CounterCacheKey key = counterCacheIterator.next(); +if (key.sameTable(tableMetadata)) counterCacheIterator.remove(); } } @@ -353,8 +355,10 @@ public class CacheService implements CacheServiceMBean { public void serialize(CounterCacheKey key, DataOutputPlus out, ColumnFamilyStore cfs) throws IOException { -assert(cfs.metadata.isCounter()); -out.write(cfs.metadata.ksAndCFBytes); +assert(cfs.metadata().isCounter()); +TableMetadata tableMetadata = cfs.metadata(); +tableMetadata.id.serialize(out); +out.writeUTF(tableMetadata.indexName().orElse("")); key.write(out); } @@ -362,8 +366,10 @@ public class CacheService implements CacheServiceMBean { //Keyspace and CF name are deserialized by AutoSaving cache and used to fetch the CFS provided as a //parameter so they aren't deserialized here, even though they are serialized by this serializer -final CounterCacheKey cacheKey = 
CounterCacheKey.read(cfs.metadata.ksAndCFName, in); -if (cfs == null || !cfs.metadata.isCounter() || !cfs.isCounterCacheEnabled()) +if (cfs == null) +return null; +final CounterCacheKey cacheKey = CounterCacheKey.read(cfs.metadata(), in); +if (!cfs.metadata().isCounter() ||
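The CacheService hunk above changes the invalidate*ForCf methods to take a TableMetadata and match cache keys via `key.sameTable(...)` instead of comparing a keyspace/table name Pair. The core mechanic is iterator-based removal over the cache's key set; a standalone sketch (the `CacheKey` class and string table ids are hypothetical stand-ins for KeyCacheKey/RowCacheKey/CounterCacheKey and TableId):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class CacheInvalidationDemo {
    // Hypothetical cache key carrying the owning table's id.
    static final class CacheKey {
        final String tableId;
        final String key;
        CacheKey(String tableId, String key) { this.tableId = tableId; this.key = key; }
        boolean sameTable(String table) { return tableId.equals(table); }
    }

    final Map<CacheKey, byte[]> cache = new HashMap<>();

    // Mirrors invalidateKeyCacheForCf: walk the key iterator and drop every
    // entry whose key belongs to the given table.
    void invalidateForTable(String tableId) {
        Iterator<CacheKey> it = cache.keySet().iterator();
        while (it.hasNext()) {
            if (it.next().sameTable(tableId))
                it.remove();
        }
    }

    public static void main(String[] args) {
        CacheInvalidationDemo demo = new CacheInvalidationDemo();
        demo.cache.put(new CacheKey("t1", "a"), new byte[0]);
        demo.cache.put(new CacheKey("t1", "b"), new byte[0]);
        demo.cache.put(new CacheKey("t2", "a"), new byte[0]);
        demo.invalidateForTable("t1");
        System.out.println(demo.cache.size()); // 1
    }
}
```

Removing through the iterator (rather than calling `cache.remove` inside the loop) avoids ConcurrentModificationException on the backing map.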
[22/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/db/view/ViewUpdateGenerator.java -- diff --git a/src/java/org/apache/cassandra/db/view/ViewUpdateGenerator.java b/src/java/org/apache/cassandra/db/view/ViewUpdateGenerator.java index edbda1c..e276f62 100644 --- a/src/java/org/apache/cassandra/db/view/ViewUpdateGenerator.java +++ b/src/java/org/apache/cassandra/db/view/ViewUpdateGenerator.java @@ -23,8 +23,9 @@ import java.util.*; import com.google.common.collect.Iterators; import com.google.common.collect.PeekingIterator; -import org.apache.cassandra.config.CFMetaData; -import org.apache.cassandra.config.ColumnDefinition; +import org.apache.cassandra.schema.ColumnMetadata; +import org.apache.cassandra.schema.Schema; +import org.apache.cassandra.schema.TableMetadata; import org.apache.cassandra.db.*; import org.apache.cassandra.db.rows.*; import org.apache.cassandra.db.partitions.*; @@ -45,11 +46,11 @@ public class ViewUpdateGenerator private final View view; private final int nowInSec; -private final CFMetaData baseMetadata; +private final TableMetadata baseMetadata; private final DecoratedKey baseDecoratedKey; private final ByteBuffer[] basePartitionKey; -private final CFMetaData viewMetadata; +private final TableMetadata viewMetadata; private final Mapupdates = new HashMap<>(); @@ -87,9 +88,9 @@ public class ViewUpdateGenerator this.baseMetadata = view.getDefinition().baseTableMetadata(); this.baseDecoratedKey = basePartitionKey; -this.basePartitionKey = extractKeyComponents(basePartitionKey, baseMetadata.getKeyValidator()); +this.basePartitionKey = extractKeyComponents(basePartitionKey, baseMetadata.partitionKeyType); -this.viewMetadata = view.getDefinition().metadata; +this.viewMetadata = Schema.instance.getTableMetadata(view.getDefinition().metadata.id); this.currentViewEntryPartitionKey = new ByteBuffer[viewMetadata.partitionKeyColumns().size()]; this.currentViewEntryBuilder = BTreeRow.sortedBuilder(); @@ -191,7 
+192,7 @@ public class ViewUpdateGenerator : (mergedHasLiveData ? UpdateAction.NEW_ENTRY : UpdateAction.NONE); } -ColumnDefinition baseColumn = view.baseNonPKColumnsInViewPK.get(0); +ColumnMetadata baseColumn = view.baseNonPKColumnsInViewPK.get(0); assert !baseColumn.isComplex() : "A complex column couldn't be part of the view PK"; Cell before = existingBaseRow == null ? null : existingBaseRow.getCell(baseColumn); Cell after = mergedBaseRow.getCell(baseColumn); @@ -237,7 +238,7 @@ public class ViewUpdateGenerator for (ColumnData data : baseRow) { -ColumnDefinition viewColumn = view.getViewColumn(data.column()); +ColumnMetadata viewColumn = view.getViewColumn(data.column()); // If that base table column is not denormalized in the view, we had nothing to do. // Alose, if it's part of the view PK it's already been taken into account in the clustering. if (viewColumn == null || viewColumn.isPrimaryKeyColumn()) @@ -293,8 +294,8 @@ public class ViewUpdateGenerator PeekingIterator existingIter = Iterators.peekingIterator(existingBaseRow.iterator()); for (ColumnData mergedData : mergedBaseRow) { -ColumnDefinition baseColumn = mergedData.column(); -ColumnDefinition viewColumn = view.getViewColumn(baseColumn); +ColumnMetadata baseColumn = mergedData.column(); +ColumnMetadata viewColumn = view.getViewColumn(baseColumn); // If that base table column is not denormalized in the view, we had nothing to do. // Alose, if it's part of the view PK it's already been taken into account in the clustering. 
if (viewColumn == null || viewColumn.isPrimaryKeyColumn()) @@ -397,9 +398,9 @@ public class ViewUpdateGenerator private void startNewUpdate(Row baseRow) { ByteBuffer[] clusteringValues = new ByteBuffer[viewMetadata.clusteringColumns().size()]; -for (ColumnDefinition viewColumn : viewMetadata.primaryKeyColumns()) +for (ColumnMetadata viewColumn : viewMetadata.primaryKeyColumns()) { -ColumnDefinition baseColumn = view.getBaseColumn(viewColumn); +ColumnMetadata baseColumn = view.getBaseColumn(viewColumn); ByteBuffer value = getValueForPK(baseColumn, baseRow); if (viewColumn.isPartitionKey()) currentViewEntryPartitionKey[viewColumn.position()] = value; @@ -457,7 +458,7 @@ public class ViewUpdateGenerator : LivenessInfo.withExpirationTime(baseLiveness.timestamp(), ttl, expirationTime); } -ColumnDefinition baseColumn =
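The ViewUpdateGenerator hunk above builds a view row's primary key by mapping each view PK column back to its base-table column (`view.getBaseColumn(viewColumn)`) and pulling the value from the base row. A greatly simplified standalone sketch of that mapping step (all names and the string-valued rows are hypothetical; the real code distinguishes partition-key from clustering slots and works on serialized ByteBuffers):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class ViewKeyMappingDemo {
    // For each view primary-key column, take the value of the corresponding
    // base-table column from the base row.
    static Map<String, String> buildViewKey(Map<String, String> baseRow,
                                            Map<String, String> viewToBaseColumn) {
        Map<String, String> viewKey = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : viewToBaseColumn.entrySet())
            viewKey.put(e.getKey(), baseRow.get(e.getValue()));
        return viewKey;
    }

    public static void main(String[] args) {
        Map<String, String> baseRow = new HashMap<>();
        baseRow.put("user_id", "42");
        baseRow.put("game", "chess");
        // View keyed by (game, user_id), denormalized from the base table.
        Map<String, String> mapping = new LinkedHashMap<>();
        mapping.put("game", "game");
        mapping.put("user_id", "user_id");
        System.out.println(buildViewKey(baseRow, mapping)); // {game=chess, user_id=42}
    }
}
```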
[10/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/test/unit/org/apache/cassandra/cql3/selection/SelectionColumnMappingTest.java -- diff --git a/test/unit/org/apache/cassandra/cql3/selection/SelectionColumnMappingTest.java b/test/unit/org/apache/cassandra/cql3/selection/SelectionColumnMappingTest.java index 975eb8e..30fbb0d 100644 --- a/test/unit/org/apache/cassandra/cql3/selection/SelectionColumnMappingTest.java +++ b/test/unit/org/apache/cassandra/cql3/selection/SelectionColumnMappingTest.java @@ -26,9 +26,9 @@ import java.util.List; import org.junit.BeforeClass; import org.junit.Test; -import org.apache.cassandra.config.ColumnDefinition; +import org.apache.cassandra.schema.ColumnMetadata; import org.apache.cassandra.config.DatabaseDescriptor; -import org.apache.cassandra.config.Schema; +import org.apache.cassandra.schema.Schema; import org.apache.cassandra.cql3.*; import org.apache.cassandra.cql3.statements.SelectStatement; import org.apache.cassandra.db.marshal.*; @@ -45,7 +45,7 @@ import static org.junit.Assert.assertTrue; public class SelectionColumnMappingTest extends CQLTester { -private static final ColumnDefinition NULL_DEF = null; +private static final ColumnMetadata NULL_DEF = null; String tableName; String typeName; UserType userType; @@ -71,7 +71,7 @@ public class SelectionColumnMappingTest extends CQLTester " v1 int," + " v2 ascii," + " v3 frozen<" + typeName + ">)"); -userType = Schema.instance.getKSMetaData(KEYSPACE).types.get(ByteBufferUtil.bytes(typeName)).get().freeze(); +userType = Schema.instance.getKeyspaceMetadata(KEYSPACE).types.get(ByteBufferUtil.bytes(typeName)).get().freeze(); functionName = createFunction(KEYSPACE, "int, ascii", "CREATE FUNCTION %s (i int, a ascii) " + "CALLED ON NULL INPUT " + @@ -130,7 +130,7 @@ public class SelectionColumnMappingTest extends CQLTester private void testSimpleTypes() throws Throwable { // simple column identifiers without aliases are represented in -// ResultSet.Metadata by the underlying 
ColumnDefinition +// ResultSet.Metadata by the underlying ColumnMetadata ColumnSpecification kSpec = columnSpecification("k", Int32Type.instance); ColumnSpecification v1Spec = columnSpecification("v1", Int32Type.instance); ColumnSpecification v2Spec = columnSpecification("v2", AsciiType.instance); @@ -144,12 +144,12 @@ public class SelectionColumnMappingTest extends CQLTester private void testWildcard() throws Throwable { -// Wildcard select represents each column in the table with a ColumnDefinition +// Wildcard select represents each column in the table with a ColumnMetadata // in the ResultSet metadata -ColumnDefinition kSpec = columnDefinition("k"); -ColumnDefinition v1Spec = columnDefinition("v1"); -ColumnDefinition v2Spec = columnDefinition("v2"); -ColumnDefinition v3Spec = columnDefinition("v3"); +ColumnMetadata kSpec = columnDefinition("k"); +ColumnMetadata v1Spec = columnDefinition("v1"); +ColumnMetadata v2Spec = columnDefinition("v2"); +ColumnMetadata v3Spec = columnDefinition("v3"); SelectionColumnMapping expected = SelectionColumnMapping.newMapping() .addMapping(kSpec, columnDefinition("k")) .addMapping(v1Spec, columnDefinition("v1")) @@ -162,7 +162,7 @@ public class SelectionColumnMappingTest extends CQLTester private void testSimpleTypesWithAliases() throws Throwable { // simple column identifiers with aliases are represented in ResultSet.Metadata -// by a ColumnSpecification based on the underlying ColumnDefinition +// by a ColumnSpecification based on the underlying ColumnMetadata ColumnSpecification kSpec = columnSpecification("k_alias", Int32Type.instance); ColumnSpecification v1Spec = columnSpecification("v1_alias", Int32Type.instance); ColumnSpecification v2Spec = columnSpecification("v2_alias", AsciiType.instance); @@ -406,7 +406,7 @@ public class SelectionColumnMappingTest extends CQLTester private void testMultipleUnaliasedSelectionOfSameColumn() throws Throwable { // simple column identifiers without aliases are represented in -// 
ResultSet.Metadata by the underlying ColumnDefinition +// ResultSet.Metadata by the underlying ColumnMetadata SelectionColumns expected = SelectionColumnMapping.newMapping() .addMapping(columnSpecification("v1",
[25/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java -- diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java index d917884..0716d47 100644 --- a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java +++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java @@ -37,6 +37,9 @@ import org.apache.cassandra.db.commitlog.CommitLog.Configuration; import org.apache.cassandra.db.partitions.PartitionUpdate; import org.apache.cassandra.io.FSWriteError; import org.apache.cassandra.io.util.FileUtils; +import org.apache.cassandra.schema.Schema; +import org.apache.cassandra.schema.TableId; +import org.apache.cassandra.schema.TableMetadata; import org.apache.cassandra.utils.CLibrary; import org.apache.cassandra.utils.IntegerInterval; import org.apache.cassandra.utils.concurrent.OpOrder; @@ -100,10 +103,10 @@ public abstract class CommitLogSegment private final WaitQueue syncComplete = new WaitQueue(); // a map of Cf->dirty interval in this segment; if interval is not covered by the clean set, the log contains unflushed data -private final NonBlockingHashMapcfDirty = new NonBlockingHashMap<>(1024); +private final NonBlockingHashMap tableDirty = new NonBlockingHashMap<>(1024); // a map of Cf->clean intervals; separate map from above to permit marking Cfs clean whilst the log is still in use -private final ConcurrentHashMap cfClean = new ConcurrentHashMap<>(); +private final ConcurrentHashMap tableClean = new ConcurrentHashMap<>(); public final long id; @@ -475,27 +478,27 @@ public abstract class CommitLogSegment void markDirty(Mutation mutation, int allocatedPosition) { for (PartitionUpdate update : mutation.getPartitionUpdates()) -coverInMap(cfDirty, update.metadata().cfId, allocatedPosition); +coverInMap(tableDirty, update.metadata().id, allocatedPosition); } /** - * Marks the ColumnFamily 
specified by cfId as clean for this log segment. If the + * Marks the ColumnFamily specified by id as clean for this log segment. If the * given context argument is contained in this file, it will only mark the CF as * clean if no newer writes have taken place. * - * @param cfId the column family ID that is now clean + * @param tableId the table that is now clean * @param startPosition the start of the range that is clean * @param endPosition the end of the range that is clean */ -public synchronized void markClean(UUID cfId, CommitLogPosition startPosition, CommitLogPosition endPosition) +public synchronized void markClean(TableId tableId, CommitLogPosition startPosition, CommitLogPosition endPosition) { if (startPosition.segmentId > id || endPosition.segmentId < id) return; -if (!cfDirty.containsKey(cfId)) +if (!tableDirty.containsKey(tableId)) return; int start = startPosition.segmentId == id ? startPosition.position : 0; int end = endPosition.segmentId == id ? endPosition.position : Integer.MAX_VALUE; -cfClean.computeIfAbsent(cfId, k -> new IntegerInterval.Set()).add(start, end); +tableClean.computeIfAbsent(tableId, k -> new IntegerInterval.Set()).add(start, end); removeCleanFromDirty(); } @@ -505,16 +508,16 @@ public abstract class CommitLogSegment if (isStillAllocating()) return; -Iterator<Map.Entry<UUID, IntegerInterval.Set>> iter = cfClean.entrySet().iterator(); +Iterator<Map.Entry<TableId, IntegerInterval.Set>> iter = tableClean.entrySet().iterator(); while (iter.hasNext()) { -Map.Entry<UUID, IntegerInterval.Set> clean = iter.next(); -UUID cfId = clean.getKey(); +Map.Entry<TableId, IntegerInterval.Set> clean = iter.next(); +TableId tableId = clean.getKey(); IntegerInterval.Set cleanSet = clean.getValue(); -IntegerInterval dirtyInterval = cfDirty.get(cfId); +IntegerInterval dirtyInterval = tableDirty.get(tableId); if (dirtyInterval != null && cleanSet.covers(dirtyInterval)) { -cfDirty.remove(cfId); +tableDirty.remove(tableId); iter.remove(); } } @@ -523,17 +526,17 @@ public abstract class CommitLogSegment /** * @return a collection of dirty CFIDs for this segment file.
*/ -public synchronized Collection<UUID> getDirtyCFIDs() +public synchronized Collection<TableId> getDirtyTableIds() { -if (cfClean.isEmpty()
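The markClean hunk above lazily creates the per-table clean-interval set with `computeIfAbsent(tableId, k -> new IntegerInterval.Set()).add(start, end)`. A standalone sketch of that pattern (the `IntervalSet` here is a hypothetical simplification: unlike the real IntegerInterval.Set it does not coalesce adjacent ranges, so only single-range coverage is detected):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CleanIntervalDemo {
    // Hypothetical stand-in for IntegerInterval.Set: a bag of [start, end]
    // ranges with a containment test. No coalescing of adjacent ranges.
    static final class IntervalSet {
        final List<int[]> ranges = new ArrayList<>();
        void add(int start, int end) { ranges.add(new int[]{ start, end }); }
        boolean covers(int start, int end) {
            for (int[] r : ranges)
                if (r[0] <= start && end <= r[1])
                    return true;
            return false;
        }
    }

    final Map<String, IntervalSet> tableClean = new HashMap<>();

    // Mirrors CommitLogSegment.markClean: create the per-table set on first
    // use, then record the newly clean range.
    void markClean(String tableId, int start, int end) {
        tableClean.computeIfAbsent(tableId, k -> new IntervalSet()).add(start, end);
    }

    public static void main(String[] args) {
        CleanIntervalDemo demo = new CleanIntervalDemo();
        demo.markClean("t1", 0, 100);
        demo.markClean("t1", 100, 200);
        System.out.println(demo.tableClean.get("t1").covers(10, 50)); // true
    }
}
```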
[11/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/test/unit/org/apache/cassandra/cql3/restrictions/ClusteringColumnRestrictionsTest.java -- diff --git a/test/unit/org/apache/cassandra/cql3/restrictions/ClusteringColumnRestrictionsTest.java b/test/unit/org/apache/cassandra/cql3/restrictions/ClusteringColumnRestrictionsTest.java index 83c00d0..35adff3 100644 --- a/test/unit/org/apache/cassandra/cql3/restrictions/ClusteringColumnRestrictionsTest.java +++ b/test/unit/org/apache/cassandra/cql3/restrictions/ClusteringColumnRestrictionsTest.java @@ -24,8 +24,8 @@ import com.google.common.collect.Iterables; import org.junit.BeforeClass; import org.junit.Test; -import org.apache.cassandra.config.CFMetaData; -import org.apache.cassandra.config.ColumnDefinition; +import org.apache.cassandra.schema.ColumnMetadata; +import org.apache.cassandra.schema.TableMetadata; import org.apache.cassandra.config.DatabaseDescriptor; import org.apache.cassandra.cql3.*; import org.apache.cassandra.cql3.Term.MultiItemTerminal; @@ -52,9 +52,9 @@ public class ClusteringColumnRestrictionsTest @Test public void testBoundsAsClusteringWithNoRestrictions() { -CFMetaData cfMetaData = newCFMetaData(Sort.ASC); +TableMetadata tableMetadata = newTableMetadata(Sort.ASC); -ClusteringColumnRestrictions restrictions = new ClusteringColumnRestrictions(cfMetaData); +ClusteringColumnRestrictions restrictions = new ClusteringColumnRestrictions(tableMetadata); SortedSet bounds = restrictions.boundsAsClustering(Bound.START, QueryOptions.DEFAULT); assertEquals(1, bounds.size()); @@ -71,12 +71,12 @@ public class ClusteringColumnRestrictionsTest @Test public void testBoundsAsClusteringWithOneEqRestrictionsAndOneClusteringColumn() { -CFMetaData cfMetaData = newCFMetaData(Sort.ASC); +TableMetadata tableMetadata = newTableMetadata(Sort.ASC); ByteBuffer clustering_0 = ByteBufferUtil.bytes(1); -Restriction eq = newSingleEq(cfMetaData, 0, clustering_0); +Restriction eq = newSingleEq(tableMetadata, 0, 
clustering_0); -ClusteringColumnRestrictions restrictions = new ClusteringColumnRestrictions(cfMetaData); +ClusteringColumnRestrictions restrictions = new ClusteringColumnRestrictions(tableMetadata); restrictions = restrictions.mergeWith(eq); SortedSet bounds = restrictions.boundsAsClustering(Bound.START, QueryOptions.DEFAULT); @@ -94,12 +94,12 @@ public class ClusteringColumnRestrictionsTest @Test public void testBoundsAsClusteringWithOneEqRestrictionsAndTwoClusteringColumns() { -CFMetaData cfMetaData = newCFMetaData(Sort.ASC, Sort.ASC); +TableMetadata tableMetadata = newTableMetadata(Sort.ASC, Sort.ASC); ByteBuffer clustering_0 = ByteBufferUtil.bytes(1); -Restriction eq = newSingleEq(cfMetaData, 0, clustering_0); +Restriction eq = newSingleEq(tableMetadata, 0, clustering_0); -ClusteringColumnRestrictions restrictions = new ClusteringColumnRestrictions(cfMetaData); +ClusteringColumnRestrictions restrictions = new ClusteringColumnRestrictions(tableMetadata); restrictions = restrictions.mergeWith(eq); SortedSet bounds = restrictions.boundsAsClustering(Bound.START, QueryOptions.DEFAULT); @@ -121,11 +121,11 @@ public class ClusteringColumnRestrictionsTest ByteBuffer value2 = ByteBufferUtil.bytes(2); ByteBuffer value3 = ByteBufferUtil.bytes(3); -CFMetaData cfMetaData = newCFMetaData(Sort.ASC, Sort.ASC); +TableMetadata tableMetadata = newTableMetadata(Sort.ASC, Sort.ASC); -Restriction in = newSingleIN(cfMetaData, 0, value1, value2, value3); +Restriction in = newSingleIN(tableMetadata, 0, value1, value2, value3); -ClusteringColumnRestrictions restrictions = new ClusteringColumnRestrictions(cfMetaData); +ClusteringColumnRestrictions restrictions = new ClusteringColumnRestrictions(tableMetadata); restrictions = restrictions.mergeWith(in); SortedSet bounds = restrictions.boundsAsClustering(Bound.START, QueryOptions.DEFAULT); @@ -147,13 +147,13 @@ public class ClusteringColumnRestrictionsTest @Test public void 
testBoundsAsClusteringWithSliceRestrictionsAndOneClusteringColumn() { -CFMetaData cfMetaData = newCFMetaData(Sort.ASC, Sort.ASC); +TableMetadata tableMetadata = newTableMetadata(Sort.ASC, Sort.ASC); ByteBuffer value1 = ByteBufferUtil.bytes(1); ByteBuffer value2 = ByteBufferUtil.bytes(2); -Restriction slice = newSingleSlice(cfMetaData, 0, Bound.START, false, value1); -ClusteringColumnRestrictions restrictions = new ClusteringColumnRestrictions(cfMetaData); +Restriction slice = newSingleSlice(tableMetadata, 0, Bound.START, false, value1); +
[26/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/db/SystemKeyspace.java -- diff --git a/src/java/org/apache/cassandra/db/SystemKeyspace.java b/src/java/org/apache/cassandra/db/SystemKeyspace.java index 5b63ba6..e826dd8 100644 --- a/src/java/org/apache/cassandra/db/SystemKeyspace.java +++ b/src/java/org/apache/cassandra/db/SystemKeyspace.java @@ -38,34 +38,34 @@ import com.google.common.io.ByteStreams; import org.slf4j.Logger; import org.slf4j.LoggerFactory; -import org.apache.cassandra.config.CFMetaData; -import org.apache.cassandra.config.ColumnDefinition; import org.apache.cassandra.config.DatabaseDescriptor; -import org.apache.cassandra.config.SchemaConstants; import org.apache.cassandra.cql3.QueryProcessor; import org.apache.cassandra.cql3.UntypedResultSet; import org.apache.cassandra.cql3.functions.*; +import org.apache.cassandra.cql3.statements.CreateTableStatement; import org.apache.cassandra.db.commitlog.CommitLogPosition; import org.apache.cassandra.db.compaction.CompactionHistoryTabularData; import org.apache.cassandra.db.marshal.*; -import org.apache.cassandra.db.rows.Rows; import org.apache.cassandra.db.partitions.PartitionUpdate; +import org.apache.cassandra.db.rows.Rows; import org.apache.cassandra.dht.*; import org.apache.cassandra.exceptions.ConfigurationException; -import org.apache.cassandra.io.sstable.Descriptor; import org.apache.cassandra.io.util.*; import org.apache.cassandra.locator.IEndpointSnitch; import org.apache.cassandra.metrics.RestorableMeter; import org.apache.cassandra.net.MessagingService; import org.apache.cassandra.schema.*; +import org.apache.cassandra.schema.SchemaConstants; import org.apache.cassandra.service.StorageService; import org.apache.cassandra.service.paxos.Commit; import org.apache.cassandra.service.paxos.PaxosState; import org.apache.cassandra.transport.ProtocolVersion; import org.apache.cassandra.utils.*; +import static java.lang.String.format; import static 
java.util.Collections.emptyMap; import static java.util.Collections.singletonMap; + import static org.apache.cassandra.cql3.QueryProcessor.executeInternal; import static org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal; @@ -102,187 +102,203 @@ public final class SystemKeyspace public static final String BUILT_VIEWS = "built_views"; public static final String PREPARED_STATEMENTS = "prepared_statements"; -public static final CFMetaData Batches = -compile(BATCHES, -"batches awaiting replay", -"CREATE TABLE %s (" -+ "id timeuuid," -+ "mutations list<blob>," -+ "version int," -+ "PRIMARY KEY ((id)))") -.copy(new LocalPartitioner(TimeUUIDType.instance)) - .compaction(CompactionParams.scts(singletonMap("min_threshold", "2"))) -.gcGraceSeconds(0); - -private static final CFMetaData Paxos = -compile(PAXOS, -"in-progress paxos proposals", -"CREATE TABLE %s (" -+ "row_key blob," -+ "cf_id UUID," -+ "in_progress_ballot timeuuid," -+ "most_recent_commit blob," -+ "most_recent_commit_at timeuuid," -+ "most_recent_commit_version int," -+ "proposal blob," -+ "proposal_ballot timeuuid," -+ "proposal_version int," -+ "PRIMARY KEY ((row_key), cf_id))") -.compaction(CompactionParams.lcs(emptyMap())); - -private static final CFMetaData BuiltIndexes = -compile(BUILT_INDEXES, -"built column indexes", -"CREATE TABLE \"%s\" (" -+ "table_name text," // table_name here is the name of the keyspace - don't be fooled -+ "index_name text," -+ "PRIMARY KEY ((table_name), index_name)) " -+ "WITH COMPACT STORAGE"); - -private static final CFMetaData Local = -compile(LOCAL, -"information about the local node", -"CREATE TABLE %s (" -+ "key text," -+ "bootstrapped text," -+ "broadcast_address inet," -+ "cluster_name text," -+ "cql_version text," -+ "data_center text," -+ "gossip_generation int," -+ "host_id uuid," -+ "listen_address inet," -+ "native_protocol_version text," -+ "partitioner text," -+ "rack text," -+ "release_version text," -+ "rpc_address inet," -+ "schema_version uuid," -+ 
"tokens set<varchar>," -+ "truncated_at map<uuid, blob>," -+ "PRIMARY KEY ((key)))" -
[17/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/src/java/org/apache/cassandra/schema/TableMetadata.java -- diff --git a/src/java/org/apache/cassandra/schema/TableMetadata.java b/src/java/org/apache/cassandra/schema/TableMetadata.java new file mode 100644 index 000..44b1f8a --- /dev/null +++ b/src/java/org/apache/cassandra/schema/TableMetadata.java @@ -0,0 +1,956 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.cassandra.schema; + +import java.nio.ByteBuffer; +import java.util.*; +import java.util.Objects; + +import com.google.common.base.MoreObjects; +import com.google.common.collect.*; + +import org.apache.cassandra.auth.DataResource; +import org.apache.cassandra.config.DatabaseDescriptor; +import org.apache.cassandra.cql3.ColumnIdentifier; +import org.apache.cassandra.db.*; +import org.apache.cassandra.db.marshal.*; +import org.apache.cassandra.dht.IPartitioner; +import org.apache.cassandra.exceptions.ConfigurationException; +import org.apache.cassandra.utils.AbstractIterator; +import org.github.jamm.Unmetered; + +import static java.lang.String.format; +import static java.util.stream.Collectors.toList; +import static java.util.stream.Collectors.toSet; + +import static com.google.common.collect.Iterables.transform; +import static org.apache.cassandra.schema.IndexMetadata.isNameValid; + +@Unmetered +public final class TableMetadata +{ +public enum Flag +{ +SUPER, COUNTER, DENSE, COMPOUND; + +public static Set<Flag> fromStringSet(Set<String> strings) +{ +return strings.stream().map(String::toUpperCase).map(Flag::valueOf).collect(toSet()); +} + +public static Set<String> toStringSet(Set<Flag> flags) +{ +return flags.stream().map(Flag::toString).map(String::toLowerCase).collect(toSet()); +} +} + +public final String keyspace; +public final String name; +public final TableId id; + +public final IPartitioner partitioner; +public final TableParams params; +public final ImmutableSet<Flag> flags; + +private final boolean isView; +private final String indexName; // derived from table name + +/* + * All CQL3 columns definition are stored in the columns map. + * On top of that, we keep separated collection of each kind of definition, to + * 1) allow easy access to each kind and + * 2) for the partition key and clustering key ones, those lists are ordered by the "component index" of the elements. 
+ */ +public final ImmutableMap<ByteBuffer, DroppedColumn> droppedColumns; +final ImmutableMap<ByteBuffer, ColumnMetadata> columns; + +private final ImmutableList<ColumnMetadata> partitionKeyColumns; +private final ImmutableList<ColumnMetadata> clusteringColumns; +private final RegularAndStaticColumns regularAndStaticColumns; + +public final Indexes indexes; +public final Triggers triggers; + +// derived automatically from flags and columns +public final AbstractType<?> partitionKeyType; +public final ClusteringComparator comparator; + +/* + * For dense tables, this aliases the single non-PK column the table contains (since it can only have one). We keep + * that as a convenience to access that column more easily (but we could replace calls by regularAndStaticColumns().iterator().next() + * for those tables in practice). + */ +public final ColumnMetadata compactValueColumn; + +// performance hacks; TODO see if all are really necessary +public final DataResource resource; + +private TableMetadata(Builder builder) +{ +keyspace = builder.keyspace; +name = builder.name; +id = builder.id; + +partitioner = builder.partitioner; +params = builder.params.build(); +flags = Sets.immutableEnumSet(builder.flags); +isView = builder.isView; + +indexName = name.contains(".") + ? name.substring(name.indexOf('.') + 1) + : null; + +droppedColumns = ImmutableMap.copyOf(builder.droppedColumns); +Collections.sort(builder.partitionKeyColumns); +partitionKeyColumns =
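The diff above replaces the mutable `CFMetaData` with a `TableMetadata` whose fields are all final and assigned exactly once from a `Builder`. A minimal standalone sketch of that immutable-builder pattern (the class and field names here are hypothetical, not Cassandra's actual API):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the immutable-metadata pattern: every field is final and
// populated once from a Builder snapshot in the private constructor.
final class TableMetadataSketch
{
    public final String keyspace;
    public final String name;
    public final List<String> partitionKeyColumns; // unmodifiable snapshot

    private TableMetadataSketch(Builder builder)
    {
        this.keyspace = builder.keyspace;
        this.name = builder.name;
        // defensive copy: later mutation of the builder cannot leak into this instance
        this.partitionKeyColumns = Collections.unmodifiableList(new ArrayList<>(builder.partitionKeyColumns));
    }

    public static Builder builder(String keyspace, String name)
    {
        return new Builder(keyspace, name);
    }

    // "altering" an immutable schema object means: copy into a builder, tweak, build anew
    public Builder unbuild()
    {
        Builder b = new Builder(keyspace, name);
        b.partitionKeyColumns.addAll(partitionKeyColumns);
        return b;
    }

    public static final class Builder
    {
        final String keyspace;
        final String name;
        final List<String> partitionKeyColumns = new ArrayList<>();

        Builder(String keyspace, String name)
        {
            this.keyspace = keyspace;
            this.name = name;
        }

        public Builder addPartitionKeyColumn(String column)
        {
            partitionKeyColumns.add(column);
            return this;
        }

        public TableMetadataSketch build()
        {
            return new TableMetadataSketch(this);
        }
    }
}
```

Because instances are immutable, they can be shared across threads and cached without copying, which is part of the "optimize Schema" half of this commit's title.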
[09/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/test/unit/org/apache/cassandra/db/ColumnsTest.java -- diff --git a/test/unit/org/apache/cassandra/db/ColumnsTest.java b/test/unit/org/apache/cassandra/db/ColumnsTest.java index d64a5bd..ae0bbd8 100644 --- a/test/unit/org/apache/cassandra/db/ColumnsTest.java +++ b/test/unit/org/apache/cassandra/db/ColumnsTest.java @@ -30,14 +30,14 @@ import org.junit.AfterClass; import org.junit.Test; import junit.framework.Assert; -import org.apache.cassandra.MockSchema; -import org.apache.cassandra.config.CFMetaData; -import org.apache.cassandra.config.ColumnDefinition; import org.apache.cassandra.config.DatabaseDescriptor; import org.apache.cassandra.db.marshal.SetType; import org.apache.cassandra.db.marshal.UTF8Type; import org.apache.cassandra.io.util.DataInputBuffer; import org.apache.cassandra.io.util.DataOutputBuffer; +import org.apache.cassandra.schema.ColumnMetadata; +import org.apache.cassandra.schema.MockSchema; +import org.apache.cassandra.schema.TableMetadata; import org.apache.cassandra.utils.btree.BTreeSet; import static org.apache.cassandra.utils.ByteBufferUtil.bytes; @@ -49,7 +49,7 @@ public class ColumnsTest DatabaseDescriptor.daemonInitialization(); } -private static final CFMetaData cfMetaData = MockSchema.newCFS().metadata; +private static final TableMetadata TABLE_METADATA = MockSchema.newCFS().metadata(); // this tests most of our functionality, since each subset we perform // reasonably comprehensive tests of basic functionality against @@ -64,8 +64,8 @@ public class ColumnsTest { // pick some arbitrary groupings of columns to remove at-once (to avoid factorial complexity) // whatever is left after each removal, we perform this logic on again, recursively -List<List<ColumnDefinition>> removeGroups = shuffleAndGroup(Lists.newArrayList(input.definitions)); -for (List<ColumnDefinition> defs : removeGroups) +List<List<ColumnMetadata>>
removeGroups = shuffleAndGroup(Lists.newArrayList(input.definitions)); +for (List<ColumnMetadata> defs : removeGroups) { ColumnsCheck subset = input.remove(defs); @@ -77,7 +77,7 @@ public class ColumnsTest // test .mergeTo Columns otherSubset = input.columns; -for (ColumnDefinition def : subset.definitions) +for (ColumnMetadata def : subset.definitions) { otherSubset = otherSubset.without(def); assertContents(otherSubset.mergeTo(subset.columns), input.definitions); @@ -102,7 +102,7 @@ public class ColumnsTest testSerialize(randomColumns.columns, randomColumns.definitions); } -private void testSerialize(Columns columns, List<ColumnDefinition> definitions) throws IOException +private void testSerialize(Columns columns, List<ColumnMetadata> definitions) throws IOException { try (DataOutputBuffer out = new DataOutputBuffer()) { @@ -136,7 +136,7 @@ public class ColumnsTest for (int i = 0; i < 50; i++) names.add("clustering_" + i); -List<ColumnDefinition> defs = new ArrayList<>(); +List<ColumnMetadata> defs = new ArrayList<>(); addClustering(names, defs); Columns columns = Columns.from(new HashSet<>(defs)); @@ -153,8 +153,8 @@ public class ColumnsTest { testSerializeSubset(input.columns, input.columns, input.definitions); testSerializeSubset(input.columns, Columns.NONE, Collections.emptyList()); -List<List<ColumnDefinition>>
removeGroups = shuffleAndGroup(Lists.newArrayList(input.definitions)); -for (List<ColumnDefinition> defs : removeGroups) +List<List<ColumnMetadata>>
removeGroups = shuffleAndGroup(Lists.newArrayList(input.definitions)); +for (List<ColumnMetadata> defs : removeGroups) { Collections.sort(defs); ColumnsCheck subset = input.remove(defs); @@ -162,7 +162,7 @@ public class ColumnsTest } } -private void testSerializeSubset(Columns superset, Columns subset, List<ColumnDefinition> subsetDefinitions) throws IOException +private void testSerializeSubset(Columns superset, Columns subset, List<ColumnMetadata> subsetDefinitions) throws IOException { try (DataOutputBuffer out = new DataOutputBuffer()) { @@ -175,17 +175,17 @@ public class ColumnsTest } } -private static void assertContents(Columns columns, List<ColumnDefinition> defs) +private static void assertContents(Columns columns, List<ColumnMetadata> defs) { Assert.assertEquals(defs, Lists.newArrayList(columns)); boolean hasSimple = false, hasComplex = false; int firstComplexIdx = 0; int i = 0; -Iterator<ColumnDefinition> simple = columns.simpleColumns(); -Iterator<ColumnDefinition> complex = columns.complexColumns(); -Iterator<ColumnDefinition> all = columns.iterator(); -Predicate<ColumnDefinition> predicate = columns.inOrderInclusionTester(); -for (ColumnDefinition def : defs) +Iterator<ColumnMetadata> simple = columns.simpleColumns(); +
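As the comment in `ColumnsTest` notes, `shuffleAndGroup` picks arbitrary groupings of columns to remove at once, avoiding factorial subset enumeration. A standalone sketch of that shuffle-and-group idea (hypothetical helper, not the test's actual implementation; a seeded `Random` keeps it reproducible):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

final class ShuffleAndGroup
{
    // Shuffle the input, then split it into random-sized contiguous groups.
    // Removing elements group-by-group (and recursing on the remainder) covers
    // many distinct subsets without enumerating all of them.
    static <T> List<List<T>> shuffleAndGroup(List<T> input, Random random)
    {
        List<T> shuffled = new ArrayList<>(input);
        Collections.shuffle(shuffled, random);

        List<List<T>> groups = new ArrayList<>();
        int i = 0;
        while (i < shuffled.size())
        {
            // each group takes between 1 and (remaining) elements
            int size = 1 + random.nextInt(shuffled.size() - i);
            groups.add(new ArrayList<>(shuffled.subList(i, i + size)));
            i += size;
        }
        return groups;
    }
}
```

Whatever the shuffle produces, the groups always partition the input: sizes sum to the input size and the concatenation is a permutation of it.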
[06/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/test/unit/org/apache/cassandra/db/compaction/CompactionsPurgeTest.java -- diff --git a/test/unit/org/apache/cassandra/db/compaction/CompactionsPurgeTest.java b/test/unit/org/apache/cassandra/db/compaction/CompactionsPurgeTest.java index b3f1f57..eccf671 100644 --- a/test/unit/org/apache/cassandra/db/compaction/CompactionsPurgeTest.java +++ b/test/unit/org/apache/cassandra/db/compaction/CompactionsPurgeTest.java @@ -27,7 +27,7 @@ import org.junit.Test; import org.apache.cassandra.SchemaLoader; import org.apache.cassandra.Util; -import org.apache.cassandra.config.CFMetaData; +import org.apache.cassandra.cql3.statements.CreateTableStatement; import org.apache.cassandra.cql3.QueryProcessor; import org.apache.cassandra.cql3.UntypedResultSet; import org.apache.cassandra.db.*; @@ -59,23 +59,27 @@ public class CompactionsPurgeTest public static void defineSchema() throws ConfigurationException { SchemaLoader.prepareServer(); + SchemaLoader.createKeyspace(KEYSPACE1, KeyspaceParams.simple(1), SchemaLoader.standardCFMD(KEYSPACE1, CF_STANDARD1), SchemaLoader.standardCFMD(KEYSPACE1, CF_STANDARD2)); + SchemaLoader.createKeyspace(KEYSPACE2, KeyspaceParams.simple(1), SchemaLoader.standardCFMD(KEYSPACE2, CF_STANDARD1)); + SchemaLoader.createKeyspace(KEYSPACE_CACHED, KeyspaceParams.simple(1), SchemaLoader.standardCFMD(KEYSPACE_CACHED, CF_CACHED).caching(CachingParams.CACHE_EVERYTHING)); + SchemaLoader.createKeyspace(KEYSPACE_CQL, KeyspaceParams.simple(1), -CFMetaData.compile("CREATE TABLE " + CF_CQL + " (" -+ "k int PRIMARY KEY," -+ "v1 text," -+ "v2 int" -+ ")", KEYSPACE_CQL)); +CreateTableStatement.parse("CREATE TABLE " + CF_CQL + " (" + + "k int PRIMARY KEY," + + "v1 text," + + "v2 int" + + ")", KEYSPACE_CQL)); } @Test @@ -92,7 +96,7 @@ public class CompactionsPurgeTest // inserts for (int i = 0; i < 10; i++) { -RowUpdateBuilder builder = new RowUpdateBuilder(cfs.metadata, 0, key); +RowUpdateBuilder builder = new 
RowUpdateBuilder(cfs.metadata(), 0, key); builder.clustering(String.valueOf(i)) .add("val", ByteBufferUtil.EMPTY_BYTE_BUFFER) .build().applyUnsafe(); @@ -103,12 +107,12 @@ public class CompactionsPurgeTest // deletes for (int i = 0; i < 10; i++) { -RowUpdateBuilder.deleteRow(cfs.metadata, 1, key, String.valueOf(i)).applyUnsafe(); +RowUpdateBuilder.deleteRow(cfs.metadata(), 1, key, String.valueOf(i)).applyUnsafe(); } cfs.forceBlockingFlush(); // resurrect one column -RowUpdateBuilder builder = new RowUpdateBuilder(cfs.metadata, 2, key); +RowUpdateBuilder builder = new RowUpdateBuilder(cfs.metadata(), 2, key); builder.clustering(String.valueOf(5)) .add("val", ByteBufferUtil.EMPTY_BYTE_BUFFER) .build().applyUnsafe(); @@ -137,7 +141,7 @@ public class CompactionsPurgeTest // inserts for (int i = 0; i < 10; i++) { -RowUpdateBuilder builder = new RowUpdateBuilder(cfs.metadata, 0, key); +RowUpdateBuilder builder = new RowUpdateBuilder(cfs.metadata(), 0, key); builder.clustering(String.valueOf(i)) .add("val", ByteBufferUtil.EMPTY_BYTE_BUFFER) .build().applyUnsafe(); @@ -147,7 +151,7 @@ public class CompactionsPurgeTest // deletes for (int i = 0; i < 10; i++) { -RowUpdateBuilder.deleteRow(cfs.metadata, Long.MAX_VALUE, key, String.valueOf(i)).applyUnsafe(); +RowUpdateBuilder.deleteRow(cfs.metadata(), Long.MAX_VALUE, key, String.valueOf(i)).applyUnsafe(); } cfs.forceBlockingFlush(); @@ -155,7 +159,7 @@ public class CompactionsPurgeTest FBUtilities.waitOnFutures(CompactionManager.instance.submitMaximal(cfs, Integer.MAX_VALUE, false)); // resurrect one column -RowUpdateBuilder builder = new
[13/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/test/unit/org/apache/cassandra/SchemaLoader.java -- diff --git a/test/unit/org/apache/cassandra/SchemaLoader.java b/test/unit/org/apache/cassandra/SchemaLoader.java index 2bf4805..c2efb6a 100644 --- a/test/unit/org/apache/cassandra/SchemaLoader.java +++ b/test/unit/org/apache/cassandra/SchemaLoader.java @@ -21,6 +21,7 @@ import java.io.File; import java.io.IOException; import java.util.*; +import org.apache.cassandra.cql3.statements.CreateTableStatement; import org.apache.cassandra.dht.Murmur3Partitioner; import org.apache.cassandra.index.sasi.SASIIndex; import org.apache.cassandra.index.sasi.disk.OnDiskIndexBuilder; @@ -39,7 +40,7 @@ import org.apache.cassandra.gms.Gossiper; import org.apache.cassandra.index.StubIndex; import org.apache.cassandra.io.util.FileUtils; import org.apache.cassandra.schema.*; -import org.apache.cassandra.service.MigrationManager; +import org.apache.cassandra.schema.MigrationManager; import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.utils.FBUtilities; @@ -119,44 +120,31 @@ public class SchemaLoader KeyspaceParams.simple(1), Tables.of( // Column Families -standardCFMD(ks1, "Standard1").compaction(CompactionParams.scts(compactionOptions)), -standardCFMD(ks1, "Standard2"), -standardCFMD(ks1, "Standard3"), -standardCFMD(ks1, "Standard4"), -standardCFMD(ks1, "StandardGCGS0").gcGraceSeconds(0), -standardCFMD(ks1, "StandardLong1"), -standardCFMD(ks1, "StandardLong2"), -//CFMetaData.Builder.create(ks1, "ValuesWithQuotes").build(), -superCFMD(ks1, "Super1", LongType.instance), -superCFMD(ks1, "Super2", LongType.instance), -superCFMD(ks1, "Super3", LongType.instance), -superCFMD(ks1, "Super4", UTF8Type.instance), -superCFMD(ks1, "Super5", bytes), -superCFMD(ks1, "Super6", LexicalUUIDType.instance, UTF8Type.instance), -keysIndexCFMD(ks1, "Indexed1", true), -keysIndexCFMD(ks1, "Indexed2", false), -//CFMetaData.Builder.create(ks1, 
"StandardInteger1").withColumnNameComparator(IntegerType.instance).build(), -//CFMetaData.Builder.create(ks1, "StandardLong3").withColumnNameComparator(IntegerType.instance).build(), -//CFMetaData.Builder.create(ks1, "Counter1", false, false, true).build(), -//CFMetaData.Builder.create(ks1, "SuperCounter1", false, false, true, true).build(), -superCFMD(ks1, "SuperDirectGC", BytesType.instance).gcGraceSeconds(0), -//jdbcCFMD(ks1, "JdbcInteger", IntegerType.instance).addColumnDefinition(integerColumn(ks1, "JdbcInteger")), -jdbcCFMD(ks1, "JdbcUtf8", UTF8Type.instance).addColumnDefinition(utf8Column(ks1, "JdbcUtf8")), -jdbcCFMD(ks1, "JdbcLong", LongType.instance), -jdbcCFMD(ks1, "JdbcBytes", bytes), -jdbcCFMD(ks1, "JdbcAscii", AsciiType.instance), -//CFMetaData.Builder.create(ks1, "StandardComposite", false, true, false).withColumnNameComparator(composite).build(), -//CFMetaData.Builder.create(ks1, "StandardComposite2", false, true, false).withColumnNameComparator(compositeMaxMin).build(), -//CFMetaData.Builder.create(ks1, "StandardDynamicComposite", false, true, false).withColumnNameComparator(dynamicComposite).build(), -standardCFMD(ks1, "StandardLeveled").compaction(CompactionParams.lcs(leveledOptions)), -standardCFMD(ks1, "legacyleveled").compaction(CompactionParams.lcs(leveledOptions)), +standardCFMD(ks1, "Standard1").compaction(CompactionParams.scts(compactionOptions)).build(), +standardCFMD(ks1, "Standard2").build(), +standardCFMD(ks1, "Standard3").build(), +standardCFMD(ks1, "Standard4").build(), +standardCFMD(ks1, "StandardGCGS0").gcGraceSeconds(0).build(), +standardCFMD(ks1, "StandardLong1").build(), +standardCFMD(ks1, "StandardLong2").build(), +superCFMD(ks1, "Super1", LongType.instance).build(), +superCFMD(ks1, "Super2", LongType.instance).build(), +superCFMD(ks1, "Super3", LongType.instance).build(), +superCFMD(ks1, "Super4", UTF8Type.instance).build(), +superCFMD(ks1, "Super5", bytes).build(), +superCFMD(ks1, "Super6", LexicalUUIDType.instance, 
UTF8Type.instance).build(), +keysIndexCFMD(ks1, "Indexed1", true).build(), +keysIndexCFMD(ks1, "Indexed2", false).build(), +superCFMD(ks1,
[12/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/test/unit/org/apache/cassandra/config/CFMetaDataTest.java -- diff --git a/test/unit/org/apache/cassandra/config/CFMetaDataTest.java b/test/unit/org/apache/cassandra/config/CFMetaDataTest.java deleted file mode 100644 index b1249a6..000 --- a/test/unit/org/apache/cassandra/config/CFMetaDataTest.java +++ /dev/null @@ -1,193 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - *http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ -package org.apache.cassandra.config; - -import java.util.*; - -import org.apache.cassandra.SchemaLoader; -import org.apache.cassandra.cql3.QueryProcessor; -import org.apache.cassandra.cql3.UntypedResultSet; -import org.apache.cassandra.db.ColumnFamilyStore; -import org.apache.cassandra.db.Keyspace; -import org.apache.cassandra.db.Mutation; -import org.apache.cassandra.db.marshal.*; -import org.apache.cassandra.db.partitions.PartitionUpdate; -import org.apache.cassandra.db.rows.UnfilteredRowIterators; -import org.apache.cassandra.exceptions.ConfigurationException; -import org.apache.cassandra.schema.*; -import org.apache.cassandra.utils.ByteBufferUtil; -import org.apache.cassandra.utils.FBUtilities; - -import org.junit.BeforeClass; -import org.junit.Test; - -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertTrue; - -public class CFMetaDataTest -{ -private static final String KEYSPACE1 = "CFMetaDataTest1"; -private static final String CF_STANDARD1 = "Standard1"; - -@BeforeClass -public static void defineSchema() throws ConfigurationException -{ -SchemaLoader.prepareServer(); -SchemaLoader.createKeyspace(KEYSPACE1, -KeyspaceParams.simple(1), -SchemaLoader.standardCFMD(KEYSPACE1, CF_STANDARD1)); -} - -@Test -public void testConversionsInverses() throws Exception -{ -for (String keyspaceName : Schema.instance.getNonSystemKeyspaces()) -{ -for (ColumnFamilyStore cfs : Keyspace.open(keyspaceName).getColumnFamilyStores()) -{ -CFMetaData cfm = cfs.metadata; -checkInverses(cfm); - -// Testing with compression to catch #3558 -CFMetaData withCompression = cfm.copy(); -withCompression.compression(CompressionParams.snappy(32768)); -checkInverses(withCompression); -} -} -} - -private void checkInverses(CFMetaData cfm) throws Exception -{ -KeyspaceMetadata keyspace = Schema.instance.getKSMetaData(cfm.ksName); - -// Test schema conversion -Mutation rm = 
SchemaKeyspace.makeCreateTableMutation(keyspace, cfm, FBUtilities.timestampMicros()).build(); -PartitionUpdate cfU = rm.getPartitionUpdate(Schema.instance.getId(SchemaConstants.SCHEMA_KEYSPACE_NAME, SchemaKeyspace.TABLES)); -PartitionUpdate cdU = rm.getPartitionUpdate(Schema.instance.getId(SchemaConstants.SCHEMA_KEYSPACE_NAME, SchemaKeyspace.COLUMNS)); - -UntypedResultSet.Row tableRow = QueryProcessor.resultify(String.format("SELECT * FROM %s.%s", SchemaConstants.SCHEMA_KEYSPACE_NAME, SchemaKeyspace.TABLES), - UnfilteredRowIterators.filter(cfU.unfilteredIterator(), FBUtilities.nowInSeconds())) - .one(); -TableParams params = SchemaKeyspace.createTableParamsFromRow(tableRow); - -UntypedResultSet columnsRows = QueryProcessor.resultify(String.format("SELECT * FROM %s.%s", SchemaConstants.SCHEMA_KEYSPACE_NAME, SchemaKeyspace.COLUMNS), - UnfilteredRowIterators.filter(cdU.unfilteredIterator(), FBUtilities.nowInSeconds())); -Set<ColumnDefinition> columns = new HashSet<>(); -for (UntypedResultSet.Row row : columnsRows) -columns.add(SchemaKeyspace.createColumnFromRow(row, Types.none())); - -assertEquals(cfm.params, params); -assertEquals(new HashSet<>(cfm.allColumns()),
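The deleted `CFMetaDataTest.checkInverses` verified a round-trip property: serializing table metadata into schema rows and reading it back must reproduce the original object. A standalone sketch of that inverse-conversion check, using a hypothetical `Params` class and a string map in place of Cassandra's schema tables:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Objects;

// Sketch of the "conversions are inverses" property: encode an object into
// its storage representation, decode it back, and assert equality.
final class RoundTrip
{
    static final class Params
    {
        final int gcGraceSeconds;
        final String comment;

        Params(int gcGraceSeconds, String comment)
        {
            this.gcGraceSeconds = gcGraceSeconds;
            this.comment = comment;
        }

        // encode: object -> row representation
        Map<String, String> toRow()
        {
            Map<String, String> row = new LinkedHashMap<>();
            row.put("gc_grace_seconds", Integer.toString(gcGraceSeconds));
            row.put("comment", comment);
            return row;
        }

        // decode: row representation -> object; must invert toRow()
        static Params fromRow(Map<String, String> row)
        {
            return new Params(Integer.parseInt(row.get("gc_grace_seconds")), row.get("comment"));
        }

        @Override
        public boolean equals(Object o)
        {
            if (!(o instanceof Params))
                return false;
            Params p = (Params) o;
            return gcGraceSeconds == p.gcGraceSeconds && Objects.equals(comment, p.comment);
        }

        @Override
        public int hashCode()
        {
            return Objects.hash(gcGraceSeconds, comment);
        }
    }
}
```

The value of a deep `equals` here is exactly what the deleted test relied on: any field dropped or corrupted by the encode/decode pair makes the assertion fail.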
[04/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java -- diff --git a/test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java b/test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java index deb7747..4eba4d8 100644 --- a/test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java +++ b/test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java @@ -17,7 +17,6 @@ */ package org.apache.cassandra.index.sasi; -import java.io.File; import java.io.FileWriter; import java.io.Writer; import java.nio.ByteBuffer; @@ -33,13 +32,12 @@ import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import org.apache.cassandra.SchemaLoader; -import org.apache.cassandra.config.CFMetaData; -import org.apache.cassandra.config.ColumnDefinition; +import org.apache.cassandra.schema.ColumnMetadata; +import org.apache.cassandra.schema.TableMetadata; import org.apache.cassandra.config.DatabaseDescriptor; import org.apache.cassandra.cql3.*; import org.apache.cassandra.cql3.Term; import org.apache.cassandra.cql3.statements.IndexTarget; -import org.apache.cassandra.cql3.statements.SelectStatement; import org.apache.cassandra.db.*; import org.apache.cassandra.db.filter.ColumnFilter; import org.apache.cassandra.db.filter.DataLimits; @@ -61,16 +59,10 @@ import org.apache.cassandra.index.sasi.memory.IndexMemtable; import org.apache.cassandra.index.sasi.plan.QueryController; import org.apache.cassandra.index.sasi.plan.QueryPlan; import org.apache.cassandra.io.sstable.SSTable; -import org.apache.cassandra.io.sstable.format.big.BigFormat; import org.apache.cassandra.schema.IndexMetadata; -import org.apache.cassandra.schema.KeyspaceMetadata; import org.apache.cassandra.schema.KeyspaceParams; -import org.apache.cassandra.schema.Tables; import org.apache.cassandra.serializers.MarshalException; import org.apache.cassandra.serializers.TypeSerializer; -import 
org.apache.cassandra.service.MigrationManager; -import org.apache.cassandra.service.QueryState; -import org.apache.cassandra.transport.messages.ResultMessage; import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.utils.FBUtilities; import org.apache.cassandra.utils.Pair; @@ -102,13 +94,13 @@ public class SASIIndexTest public static void loadSchema() throws ConfigurationException { SchemaLoader.loadSchema(); -MigrationManager.announceNewKeyspace(KeyspaceMetadata.create(KS_NAME, - KeyspaceParams.simpleTransient(1), - Tables.of(SchemaLoader.sasiCFMD(KS_NAME, CF_NAME), - SchemaLoader.clusteringSASICFMD(KS_NAME, CLUSTERING_CF_NAME_1), - SchemaLoader.clusteringSASICFMD(KS_NAME, CLUSTERING_CF_NAME_2, "location"), - SchemaLoader.staticSASICFMD(KS_NAME, STATIC_CF_NAME), - SchemaLoader.fullTextSearchSASICFMD(KS_NAME, FTS_CF_NAME)))); +SchemaLoader.createKeyspace(KS_NAME, +KeyspaceParams.simpleTransient(1), +SchemaLoader.sasiCFMD(KS_NAME, CF_NAME), +SchemaLoader.clusteringSASICFMD(KS_NAME, CLUSTERING_CF_NAME_1), +SchemaLoader.clusteringSASICFMD(KS_NAME, CLUSTERING_CF_NAME_2, "location"), +SchemaLoader.staticSASICFMD(KS_NAME, STATIC_CF_NAME), + SchemaLoader.fullTextSearchSASICFMD(KS_NAME, FTS_CF_NAME)); } @Before @@ -771,25 +763,25 @@ public class SASIIndexTest ColumnFamilyStore store = Keyspace.open(KS_NAME).getColumnFamilyStore(CF_NAME); Mutation rm1 = new Mutation(KS_NAME, decoratedKey(AsciiType.instance.decompose("key1"))); -rm1.add(PartitionUpdate.singleRowUpdate(store.metadata, +rm1.add(PartitionUpdate.singleRowUpdate(store.metadata(), rm1.key(), - buildRow(buildCell(store.metadata, + buildRow(buildCell(store.metadata(), UTF8Type.instance.decompose("/data/output/id"), AsciiType.instance.decompose("jason"), System.currentTimeMillis()))))); Mutation rm2 = new
[08/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/test/unit/org/apache/cassandra/db/RangeTombstoneTest.java -- diff --git a/test/unit/org/apache/cassandra/db/RangeTombstoneTest.java b/test/unit/org/apache/cassandra/db/RangeTombstoneTest.java index 9120546..4f9b12f 100644 --- a/test/unit/org/apache/cassandra/db/RangeTombstoneTest.java +++ b/test/unit/org/apache/cassandra/db/RangeTombstoneTest.java @@ -31,7 +31,7 @@ import org.junit.Test; import org.apache.cassandra.SchemaLoader; import org.apache.cassandra.UpdateBuilder; import org.apache.cassandra.Util; -import org.apache.cassandra.config.ColumnDefinition; +import org.apache.cassandra.schema.ColumnMetadata; import org.apache.cassandra.cql3.statements.IndexTarget; import org.apache.cassandra.db.compaction.CompactionManager; import org.apache.cassandra.db.filter.ColumnFilter; @@ -45,9 +45,12 @@ import org.apache.cassandra.io.sstable.format.SSTableReader; import org.apache.cassandra.io.sstable.metadata.StatsMetadata; import org.apache.cassandra.schema.IndexMetadata; import org.apache.cassandra.schema.KeyspaceParams; +import org.apache.cassandra.schema.TableMetadata; +import org.apache.cassandra.schema.MigrationManager; import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.utils.FBUtilities; +import static org.apache.cassandra.SchemaLoader.standardCFMD; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; @@ -63,12 +66,7 @@ public class RangeTombstoneTest SchemaLoader.prepareServer(); SchemaLoader.createKeyspace(KSNAME, KeyspaceParams.simple(1), -SchemaLoader.standardCFMD(KSNAME, - CFNAME, - 1, - UTF8Type.instance, - Int32Type.instance, - Int32Type.instance)); +standardCFMD(KSNAME, CFNAME, 1, UTF8Type.instance, Int32Type.instance, Int32Type.instance)); } @Test @@ -82,20 +80,20 @@ public class RangeTombstoneTest UpdateBuilder builder; -builder = UpdateBuilder.create(cfs.metadata, key).withTimestamp(0); 
+builder = UpdateBuilder.create(cfs.metadata(), key).withTimestamp(0); for (int i = 0; i < 40; i += 2) builder.newRow(i).add("val", i); builder.applyUnsafe(); cfs.forceBlockingFlush(); -new RowUpdateBuilder(cfs.metadata, 1, key).addRangeTombstone(10, 22).build().applyUnsafe(); +new RowUpdateBuilder(cfs.metadata(), 1, key).addRangeTombstone(10, 22).build().applyUnsafe(); -builder = UpdateBuilder.create(cfs.metadata, key).withTimestamp(2); +builder = UpdateBuilder.create(cfs.metadata(), key).withTimestamp(2); for (int i = 1; i < 40; i += 2) builder.newRow(i).add("val", i); builder.applyUnsafe(); -new RowUpdateBuilder(cfs.metadata, 3, key).addRangeTombstone(19, 27).build().applyUnsafe(); +new RowUpdateBuilder(cfs.metadata(), 3, key).addRangeTombstone(19, 27).build().applyUnsafe(); // We don't flush to test with both a range tomsbtone in memtable and in sstable // Queries by name @@ -135,14 +133,14 @@ public class RangeTombstoneTest // Inserting data String key = "k111"; -UpdateBuilder builder = UpdateBuilder.create(cfs.metadata, key).withTimestamp(0); +UpdateBuilder builder = UpdateBuilder.create(cfs.metadata(), key).withTimestamp(0); for (int i = 0; i < 40; i += 2) builder.newRow(i).add("val", i); builder.applyUnsafe(); -new RowUpdateBuilder(cfs.metadata, 1, key).addRangeTombstone(5, 10).build().applyUnsafe(); +new RowUpdateBuilder(cfs.metadata(), 1, key).addRangeTombstone(5, 10).build().applyUnsafe(); -new RowUpdateBuilder(cfs.metadata, 2, key).addRangeTombstone(15, 20).build().applyUnsafe(); +new RowUpdateBuilder(cfs.metadata(), 2, key).addRangeTombstone(15, 20).build().applyUnsafe(); ImmutableBTreePartition partition; @@ -210,14 +208,14 @@ public class RangeTombstoneTest sb.add(ClusteringBound.create(cfs.getComparator(), true, true, 1), ClusteringBound.create(cfs.getComparator(), false, true, 10)); sb.add(ClusteringBound.create(cfs.getComparator(), true, true, 16), ClusteringBound.create(cfs.getComparator(), false, true, 20)); -partition = 
Util.getOnlyPartitionUnfiltered(SinglePartitionReadCommand.create(cfs.metadata, FBUtilities.nowInSeconds(), Util.dk(key), sb.build())); +partition =
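`RangeTombstoneTest` above overlays range deletions such as `addRangeTombstone(10, 22)` at timestamp 1 and `addRangeTombstone(19, 27)` at timestamp 3 on previously written rows. A toy model of how a range tombstone shadows cells (hypothetical class names and deliberately simplified semantics: a cell is deleted when a covering tombstone's timestamp is at least the cell's write timestamp):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of range tombstones: each deletion covers an inclusive clustering
// interval and carries the deletion timestamp. Not Cassandra's internal types.
final class RangeTombstoneModel
{
    private static final class Tombstone
    {
        final int start, end; // inclusive clustering bounds
        final long timestamp;

        Tombstone(int start, int end, long timestamp)
        {
            this.start = start;
            this.end = end;
            this.timestamp = timestamp;
        }
    }

    private final List<Tombstone> tombstones = new ArrayList<>();

    void delete(int start, int end, long timestamp)
    {
        tombstones.add(new Tombstone(start, end, timestamp));
    }

    // true when a cell at clustering position `row`, written at `writeTimestamp`,
    // is shadowed by some covering tombstone with an equal or newer timestamp
    boolean isDeleted(int row, long writeTimestamp)
    {
        for (Tombstone t : tombstones)
            if (row >= t.start && row <= t.end && t.timestamp >= writeTimestamp)
                return true;
        return false;
    }
}
```

The test exercises exactly this interplay: rows written before a covering deletion disappear from reads, while rows written afterwards (or outside every deleted range) survive, whether the tombstone currently lives in a memtable or an sstable.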
[07/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/test/unit/org/apache/cassandra/db/SecondaryIndexTest.java -- diff --git a/test/unit/org/apache/cassandra/db/SecondaryIndexTest.java b/test/unit/org/apache/cassandra/db/SecondaryIndexTest.java index a037d90..8341e30 100644 --- a/test/unit/org/apache/cassandra/db/SecondaryIndexTest.java +++ b/test/unit/org/apache/cassandra/db/SecondaryIndexTest.java @@ -30,7 +30,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.apache.cassandra.SchemaLoader; import org.apache.cassandra.Util; -import org.apache.cassandra.config.ColumnDefinition; +import org.apache.cassandra.schema.ColumnMetadata; import org.apache.cassandra.cql3.Operator; import org.apache.cassandra.cql3.statements.IndexTarget; import org.apache.cassandra.db.marshal.AbstractType; @@ -40,6 +40,8 @@ import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.index.Index; import org.apache.cassandra.schema.IndexMetadata; import org.apache.cassandra.schema.KeyspaceParams; +import org.apache.cassandra.schema.TableMetadata; +import org.apache.cassandra.schema.MigrationManager; import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.utils.FBUtilities; @@ -81,10 +83,10 @@ public class SecondaryIndexTest { ColumnFamilyStore cfs = Keyspace.open(KEYSPACE1).getColumnFamilyStore(WITH_COMPOSITE_INDEX); -new RowUpdateBuilder(cfs.metadata, 0, "k1").clustering("c").add("birthdate", 1L).add("notbirthdate", 1L).build().applyUnsafe(); -new RowUpdateBuilder(cfs.metadata, 0, "k2").clustering("c").add("birthdate", 2L).add("notbirthdate", 2L).build().applyUnsafe(); -new RowUpdateBuilder(cfs.metadata, 0, "k3").clustering("c").add("birthdate", 1L).add("notbirthdate", 2L).build().applyUnsafe(); -new RowUpdateBuilder(cfs.metadata, 0, "k4").clustering("c").add("birthdate", 3L).add("notbirthdate", 2L).build().applyUnsafe(); +new RowUpdateBuilder(cfs.metadata(), 0, "k1").clustering("c").add("birthdate", 
1L).add("notbirthdate", 1L).build().applyUnsafe(); +new RowUpdateBuilder(cfs.metadata(), 0, "k2").clustering("c").add("birthdate", 2L).add("notbirthdate", 2L).build().applyUnsafe(); +new RowUpdateBuilder(cfs.metadata(), 0, "k3").clustering("c").add("birthdate", 1L).add("notbirthdate", 2L).build().applyUnsafe(); +new RowUpdateBuilder(cfs.metadata(), 0, "k4").clustering("c").add("birthdate", 3L).add("notbirthdate", 2L).build().applyUnsafe(); // basic single-expression query List partitions = Util.getAll(Util.cmd(cfs).fromKeyExcl("k1").toKeyIncl("k3").columns("birthdate").build()); @@ -157,7 +159,7 @@ public class SecondaryIndexTest for (int i = 0; i < 100; i++) { -new RowUpdateBuilder(cfs.metadata, FBUtilities.timestampMicros(), "key" + i) +new RowUpdateBuilder(cfs.metadata(), FBUtilities.timestampMicros(), "key" + i) .clustering("c") .add("birthdate", 34L) .add("notbirthdate", ByteBufferUtil.bytes((long) (i % 2))) @@ -189,15 +191,15 @@ public class SecondaryIndexTest { ColumnFamilyStore cfs = Keyspace.open(KEYSPACE1).getColumnFamilyStore(WITH_COMPOSITE_INDEX); ByteBuffer bBB = ByteBufferUtil.bytes("birthdate"); -ColumnDefinition bDef = cfs.metadata.getColumnDefinition(bBB); +ColumnMetadata bDef = cfs.metadata().getColumn(bBB); ByteBuffer col = ByteBufferUtil.bytes("birthdate"); // Confirm addition works -new RowUpdateBuilder(cfs.metadata, 0, "k1").clustering("c").add("birthdate", 1L).build().applyUnsafe(); +new RowUpdateBuilder(cfs.metadata(), 0, "k1").clustering("c").add("birthdate", 1L).build().applyUnsafe(); assertIndexedOne(cfs, col, 1L); // delete the column directly -RowUpdateBuilder.deleteRow(cfs.metadata, 1, "k1", "c").applyUnsafe(); +RowUpdateBuilder.deleteRow(cfs.metadata(), 1, "k1", "c").applyUnsafe(); assertIndexedNone(cfs, col, 1L); // verify that it's not being indexed under any other value either @@ -205,26 +207,26 @@ public class SecondaryIndexTest assertNull(cfs.indexManager.getBestIndexFor(rc)); // resurrect w/ a newer timestamp -new 
RowUpdateBuilder(cfs.metadata, 2, "k1").clustering("c").add("birthdate", 1L).build().apply();; +new RowUpdateBuilder(cfs.metadata(), 2, "k1").clustering("c").add("birthdate", 1L).build().apply();; assertIndexedOne(cfs, col, 1L); // verify that row and delete w/ older timestamp does nothing -RowUpdateBuilder.deleteRow(cfs.metadata, 1, "k1", "c").applyUnsafe(); +RowUpdateBuilder.deleteRow(cfs.metadata(), 1, "k1", "c").applyUnsafe(); assertIndexedOne(cfs, col, 1L); // similarly, column
[01/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
Repository: cassandra Updated Branches: refs/heads/trunk 3580f6c05 -> af3fe39dc http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/test/unit/org/apache/cassandra/triggers/TriggersTest.java -- diff --git a/test/unit/org/apache/cassandra/triggers/TriggersTest.java b/test/unit/org/apache/cassandra/triggers/TriggersTest.java index 37638c9..52cebc9 100644 --- a/test/unit/org/apache/cassandra/triggers/TriggersTest.java +++ b/test/unit/org/apache/cassandra/triggers/TriggersTest.java @@ -17,17 +17,15 @@ */ package org.apache.cassandra.triggers; -import java.net.InetAddress; import java.util.Collection; import java.util.Collections; -import org.junit.AfterClass; import org.junit.Before; import org.junit.BeforeClass; import org.junit.Test; import org.apache.cassandra.SchemaLoader; -import org.apache.cassandra.config.Schema; +import org.apache.cassandra.schema.Schema; import org.apache.cassandra.cql3.QueryProcessor; import org.apache.cassandra.cql3.UntypedResultSet; import org.apache.cassandra.db.*; @@ -39,7 +37,6 @@ import org.apache.cassandra.exceptions.RequestExecutionException; import org.apache.cassandra.service.StorageService; import org.apache.cassandra.utils.FBUtilities; -import static org.apache.cassandra.utils.ByteBufferUtil.bytes; import static org.apache.cassandra.utils.ByteBufferUtil.toInt; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; @@ -230,7 +227,7 @@ public class TriggersTest public Collection augment(Partition partition) { -RowUpdateBuilder update = new RowUpdateBuilder(Schema.instance.getCFMetaData(ksName, otherCf), FBUtilities.timestampMicros(), partition.partitionKey().getKey()); +RowUpdateBuilder update = new RowUpdateBuilder(Schema.instance.getTableMetadata(ksName, otherCf), FBUtilities.timestampMicros(), partition.partitionKey().getKey()); update.add("v2", 999); return Collections.singletonList(update.build()); 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/tools/stress/src/org/apache/cassandra/io/sstable/StressCQLSSTableWriter.java -- diff --git a/tools/stress/src/org/apache/cassandra/io/sstable/StressCQLSSTableWriter.java b/tools/stress/src/org/apache/cassandra/io/sstable/StressCQLSSTableWriter.java index 41a0d6f..89fd5f9 100644 --- a/tools/stress/src/org/apache/cassandra/io/sstable/StressCQLSSTableWriter.java +++ b/tools/stress/src/org/apache/cassandra/io/sstable/StressCQLSSTableWriter.java @@ -24,12 +24,15 @@ import java.nio.ByteBuffer; import java.util.*; import java.util.stream.Collectors; +import org.apache.commons.lang3.ArrayUtils; + import com.datastax.driver.core.ProtocolVersion; import com.datastax.driver.core.TypeCodec; import org.antlr.runtime.RecognitionException; -import org.apache.cassandra.config.CFMetaData; +import org.apache.cassandra.schema.TableId; +import org.apache.cassandra.schema.TableMetadata; import org.apache.cassandra.config.DatabaseDescriptor; -import org.apache.cassandra.config.Schema; +import org.apache.cassandra.schema.Schema; import org.apache.cassandra.cql3.CQLFragmentParser; import org.apache.cassandra.cql3.ColumnSpecification; import org.apache.cassandra.cql3.CqlParser; @@ -51,6 +54,7 @@ import org.apache.cassandra.exceptions.SyntaxException; import org.apache.cassandra.io.sstable.format.SSTableFormat; import org.apache.cassandra.schema.KeyspaceMetadata; import org.apache.cassandra.schema.KeyspaceParams; +import org.apache.cassandra.schema.TableMetadataRef; import org.apache.cassandra.schema.Types; import org.apache.cassandra.service.ClientState; import org.apache.cassandra.utils.ByteBufferUtil; @@ -246,7 +250,7 @@ public class StressCQLSSTableWriter implements Closeable long now = System.currentTimeMillis() * 1000; // Note that we asks indexes to not validate values (the last 'false' arg below) because that triggers a 'Keyspace.open' // and that forces a lot of initialization that we don't want. 
-UpdateParameters params = new UpdateParameters(insert.cfm, +UpdateParameters params = new UpdateParameters(insert.metadata(), insert.updatedColumns(), options, insert.getTimestamp(now, options), @@ -307,7 +311,7 @@ public class StressCQLSSTableWriter implements Closeable */ public com.datastax.driver.core.UserType getUDType(String dataType) { -KeyspaceMetadata ksm = Schema.instance.getKSMetaData(insert.keyspace()); +KeyspaceMetadata ksm = Schema.instance.getKeyspaceMetadata(insert.keyspace()); UserType userType = ksm.types.getNullable(ByteBufferUtil.bytes(dataType));
[05/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/test/unit/org/apache/cassandra/db/partition/PartitionUpdateTest.java -- diff --git a/test/unit/org/apache/cassandra/db/partition/PartitionUpdateTest.java b/test/unit/org/apache/cassandra/db/partition/PartitionUpdateTest.java index bfa9796..fdd34f1 100644 --- a/test/unit/org/apache/cassandra/db/partition/PartitionUpdateTest.java +++ b/test/unit/org/apache/cassandra/db/partition/PartitionUpdateTest.java @@ -18,7 +18,7 @@ package org.apache.cassandra.db.partition; import org.apache.cassandra.UpdateBuilder; -import org.apache.cassandra.config.CFMetaData; +import org.apache.cassandra.schema.TableMetadata; import org.apache.cassandra.cql3.CQLTester; import org.apache.cassandra.db.RowUpdateBuilder; import org.apache.cassandra.db.partitions.PartitionUpdate; @@ -33,7 +33,7 @@ public class PartitionUpdateTest extends CQLTester public void testOperationCount() { createTable("CREATE TABLE %s (key text, clustering int, a int, s int static, PRIMARY KEY(key, clustering))"); -CFMetaData cfm = currentTableMetadata(); +TableMetadata cfm = currentTableMetadata(); UpdateBuilder builder = UpdateBuilder.create(cfm, "key0"); Assert.assertEquals(0, builder.build().operationCount()); @@ -52,7 +52,7 @@ public class PartitionUpdateTest extends CQLTester public void testOperationCountWithCompactTable() { createTable("CREATE TABLE %s (key text PRIMARY KEY, a int) WITH COMPACT STORAGE"); -CFMetaData cfm = currentTableMetadata(); +TableMetadata cfm = currentTableMetadata(); PartitionUpdate update = new RowUpdateBuilder(cfm, FBUtilities.timestampMicros(), "key0").add("a", 1) .buildUpdate(); http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/test/unit/org/apache/cassandra/db/rows/RowAndDeletionMergeIteratorTest.java -- diff --git a/test/unit/org/apache/cassandra/db/rows/RowAndDeletionMergeIteratorTest.java b/test/unit/org/apache/cassandra/db/rows/RowAndDeletionMergeIteratorTest.java index e4c04fb..fb35ead 100644 --- 
a/test/unit/org/apache/cassandra/db/rows/RowAndDeletionMergeIteratorTest.java +++ b/test/unit/org/apache/cassandra/db/rows/RowAndDeletionMergeIteratorTest.java @@ -29,18 +29,18 @@ import org.junit.BeforeClass; import org.junit.Test; import org.apache.cassandra.config.DatabaseDescriptor; +import org.apache.cassandra.schema.ColumnMetadata; +import org.apache.cassandra.schema.TableMetadata; import org.apache.cassandra.db.ClusteringPrefix; import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.db.*; import org.apache.cassandra.db.filter.ColumnFilter; import org.apache.cassandra.db.marshal.AbstractType; -import org.apache.cassandra.config.ColumnDefinition; import org.apache.cassandra.cql3.ColumnIdentifier; import org.apache.cassandra.db.partitions.PartitionUpdate; import org.apache.cassandra.db.marshal.Int32Type; import org.apache.cassandra.SchemaLoader; import org.apache.cassandra.Util; -import org.apache.cassandra.config.CFMetaData; import org.apache.cassandra.db.marshal.AsciiType; import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.schema.KeyspaceParams; @@ -55,23 +55,22 @@ public class RowAndDeletionMergeIteratorTest private int nowInSeconds; private DecoratedKey dk; private ColumnFamilyStore cfs; -private CFMetaData cfm; -private ColumnDefinition defA; +private TableMetadata cfm; +private ColumnMetadata defA; @BeforeClass public static void defineSchema() throws ConfigurationException { DatabaseDescriptor.daemonInitialization(); -CFMetaData cfMetadata = CFMetaData.Builder.create(KEYSPACE1, CF_STANDARD1) - .addPartitionKey("key", AsciiType.instance) - .addClusteringColumn("col1", Int32Type.instance) - .addRegularColumn("a", Int32Type.instance) - .build(); -SchemaLoader.prepareServer(); -SchemaLoader.createKeyspace(KEYSPACE1, -KeyspaceParams.simple(1), -cfMetadata); +TableMetadata.Builder builder = +TableMetadata.builder(KEYSPACE1, CF_STANDARD1) + .addPartitionKeyColumn("key", AsciiType.instance) + 
.addClusteringColumn("col1", Int32Type.instance) + .addRegularColumn("a", Int32Type.instance); + +SchemaLoader.prepareServer(); +SchemaLoader.createKeyspace(KEYSPACE1,
[03/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/test/unit/org/apache/cassandra/io/sstable/SSTableScannerTest.java -- diff --git a/test/unit/org/apache/cassandra/io/sstable/SSTableScannerTest.java b/test/unit/org/apache/cassandra/io/sstable/SSTableScannerTest.java index 930db50..353b1ad 100644 --- a/test/unit/org/apache/cassandra/io/sstable/SSTableScannerTest.java +++ b/test/unit/org/apache/cassandra/io/sstable/SSTableScannerTest.java @@ -24,14 +24,13 @@ import java.util.Collection; import java.util.List; import com.google.common.collect.Iterables; -import com.google.common.util.concurrent.RateLimiter; import org.junit.BeforeClass; import org.junit.Test; import org.apache.cassandra.SchemaLoader; import org.apache.cassandra.Util; -import org.apache.cassandra.config.CFMetaData; +import org.apache.cassandra.schema.TableMetadata; import org.apache.cassandra.db.ColumnFamilyStore; import org.apache.cassandra.db.DataRange; import org.apache.cassandra.db.DecoratedKey; @@ -74,7 +73,7 @@ public class SSTableScannerTest } // we produce all DataRange variations that produce an inclusive start and exclusive end range -private static Iterable dataRanges(CFMetaData metadata, int start, int end) +private static Iterable dataRanges(TableMetadata metadata, int start, int end) { if (end < start) return dataRanges(metadata, start, end, false, true); @@ -85,7 +84,7 @@ public class SSTableScannerTest ); } -private static Iterable dataRanges(CFMetaData metadata, int start, int end, boolean inclusiveStart, boolean inclusiveEnd) +private static Iterable dataRanges(TableMetadata metadata, int start, int end, boolean inclusiveStart, boolean inclusiveEnd) { List ranges = new ArrayList<>(); if (start == end + 1) @@ -143,7 +142,7 @@ public class SSTableScannerTest return token(key).maxKeyBound(); } -private static DataRange dataRange(CFMetaData metadata, PartitionPosition start, boolean startInclusive, PartitionPosition end, boolean endInclusive) +private static DataRange 
dataRange(TableMetadata metadata, PartitionPosition start, boolean startInclusive, PartitionPosition end, boolean endInclusive) { Slices.Builder sb = new Slices.Builder(metadata.comparator); ClusteringIndexSliceFilter filter = new ClusteringIndexSliceFilter(sb.build(), false); @@ -165,7 +164,7 @@ public class SSTableScannerTest return ranges; } -private static void insertRowWithKey(CFMetaData metadata, int key) +private static void insertRowWithKey(TableMetadata metadata, int key) { long timestamp = System.currentTimeMillis(); @@ -180,9 +179,9 @@ public class SSTableScannerTest private static void assertScanMatches(SSTableReader sstable, int scanStart, int scanEnd, int ... boundaries) { assert boundaries.length % 2 == 0; -for (DataRange range : dataRanges(sstable.metadata, scanStart, scanEnd)) +for (DataRange range : dataRanges(sstable.metadata(), scanStart, scanEnd)) { -try(ISSTableScanner scanner = sstable.getScanner(ColumnFilter.all(sstable.metadata), range)) +try(ISSTableScanner scanner = sstable.getScanner(ColumnFilter.all(sstable.metadata()), range)) { for (int b = 0; b < boundaries.length; b += 2) for (int i = boundaries[b]; i <= boundaries[b + 1]; i++) @@ -212,7 +211,7 @@ public class SSTableScannerTest store.disableAutoCompaction(); for (int i = 2; i < 10; i++) -insertRowWithKey(store.metadata, i); +insertRowWithKey(store.metadata(), i); store.forceBlockingFlush(); assertEquals(1, store.getLiveSSTables().size()); @@ -318,7 +317,7 @@ public class SSTableScannerTest for (int i = 0; i < 3; i++) for (int j = 2; j < 10; j++) -insertRowWithKey(store.metadata, i * 100 + j); +insertRowWithKey(store.metadata(), i * 100 + j); store.forceBlockingFlush(); assertEquals(1, store.getLiveSSTables().size()); @@ -438,7 +437,7 @@ public class SSTableScannerTest // disable compaction while flushing store.disableAutoCompaction(); -insertRowWithKey(store.metadata, 205); +insertRowWithKey(store.metadata(), 205); store.forceBlockingFlush(); assertEquals(1, 
store.getLiveSSTables().size()); http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/test/unit/org/apache/cassandra/io/sstable/SSTableUtils.java -- diff --git a/test/unit/org/apache/cassandra/io/sstable/SSTableUtils.java b/test/unit/org/apache/cassandra/io/sstable/SSTableUtils.java index 90b1857..189782c 100644 ---
[02/37] cassandra git commit: Make TableMetadata immutable, optimize Schema
http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3fe39d/test/unit/org/apache/cassandra/schema/SchemaKeyspaceTest.java -- diff --git a/test/unit/org/apache/cassandra/schema/SchemaKeyspaceTest.java b/test/unit/org/apache/cassandra/schema/SchemaKeyspaceTest.java index 2fd7b06..d4c4bb4 100644 --- a/test/unit/org/apache/cassandra/schema/SchemaKeyspaceTest.java +++ b/test/unit/org/apache/cassandra/schema/SchemaKeyspaceTest.java @@ -20,10 +20,8 @@ package org.apache.cassandra.schema; import java.io.IOException; import java.nio.ByteBuffer; -import java.util.ArrayList; import java.util.Collections; import java.util.HashSet; -import java.util.List; import java.util.Set; import com.google.common.collect.ImmutableMap; @@ -32,21 +30,15 @@ import org.junit.BeforeClass; import org.junit.Test; import org.apache.cassandra.SchemaLoader; -import org.apache.cassandra.config.CFMetaData; -import org.apache.cassandra.config.ColumnDefinition; -import org.apache.cassandra.config.Schema; -import org.apache.cassandra.config.SchemaConstants; import org.apache.cassandra.cql3.QueryProcessor; import org.apache.cassandra.cql3.UntypedResultSet; +import org.apache.cassandra.cql3.statements.CreateTableStatement; import org.apache.cassandra.db.ColumnFamilyStore; import org.apache.cassandra.db.Keyspace; import org.apache.cassandra.db.Mutation; -import org.apache.cassandra.db.marshal.AsciiType; -import org.apache.cassandra.db.marshal.UTF8Type; import org.apache.cassandra.db.partitions.PartitionUpdate; import org.apache.cassandra.db.rows.UnfilteredRowIterators; import org.apache.cassandra.exceptions.ConfigurationException; -import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.utils.FBUtilities; import static org.junit.Assert.assertEquals; @@ -73,12 +65,10 @@ public class SchemaKeyspaceTest { for (ColumnFamilyStore cfs : Keyspace.open(keyspaceName).getColumnFamilyStores()) { -CFMetaData cfm = cfs.metadata; -checkInverses(cfm); +checkInverses(cfs.metadata()); // Testing 
with compression to catch #3558 -CFMetaData withCompression = cfm.copy(); -withCompression.compression(CompressionParams.snappy(32768)); +TableMetadata withCompression = cfs.metadata().unbuild().compression(CompressionParams.snappy(32768)).build(); checkInverses(withCompression); } } @@ -91,44 +81,44 @@ public class SchemaKeyspaceTest createTable(keyspace, "CREATE TABLE test (a text primary key, b int, c int)"); -CFMetaData metadata = Schema.instance.getCFMetaData(keyspace, "test"); +TableMetadata metadata = Schema.instance.getTableMetadata(keyspace, "test"); assertTrue("extensions should be empty", metadata.params.extensions.isEmpty()); ImmutableMapextensions = ImmutableMap.of("From ... with Love", ByteBuffer.wrap(new byte[]{0, 0, 7})); -CFMetaData copy = metadata.copy().extensions(extensions); +TableMetadata copy = metadata.unbuild().extensions(extensions).build(); updateTable(keyspace, metadata, copy); -metadata = Schema.instance.getCFMetaData(keyspace, "test"); +metadata = Schema.instance.getTableMetadata(keyspace, "test"); assertEquals(extensions, metadata.params.extensions); } -private static void updateTable(String keyspace, CFMetaData oldTable, CFMetaData newTable) +private static void updateTable(String keyspace, TableMetadata oldTable, TableMetadata newTable) { KeyspaceMetadata ksm = Schema.instance.getKeyspaceInstance(keyspace).getMetadata(); Mutation mutation = SchemaKeyspace.makeUpdateTableMutation(ksm, oldTable, newTable, FBUtilities.timestampMicros()).build(); -SchemaKeyspace.mergeSchema(Collections.singleton(mutation)); +Schema.instance.merge(Collections.singleton(mutation)); } private static void createTable(String keyspace, String cql) { -CFMetaData table = CFMetaData.compile(cql, keyspace); +TableMetadata table = CreateTableStatement.parse(cql, keyspace).build(); KeyspaceMetadata ksm = KeyspaceMetadata.create(keyspace, KeyspaceParams.simple(1), Tables.of(table)); Mutation mutation = SchemaKeyspace.makeCreateTableMutation(ksm, table, 
FBUtilities.timestampMicros()).build(); -SchemaKeyspace.mergeSchema(Collections.singleton(mutation)); +Schema.instance.merge(Collections.singleton(mutation)); } -private static void checkInverses(CFMetaData cfm) throws Exception +private static void checkInverses(TableMetadata metadata) throws Exception { -KeyspaceMetadata keyspace =
[jira] [Commented] (CASSANDRA-9989) Optimise BTree.Builder
[ https://issues.apache.org/jira/browse/CASSANDRA-9989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843558#comment-15843558 ] Jay Zhuang commented on CASSANDRA-9989: --- Can I take this one? I guess it could be done by implementing bulk loading https://en.wikipedia.org/wiki/B-tree#Initial_construction instead of adding values one by one https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/utils/btree/TreeBuilder.java#L135 Is that the right direction? [~benedict] Sorry to ping you again; I'd appreciate it if you could give me more information. > Optimise BTree.Builder > - > > Key: CASSANDRA-9989 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9989 > Project: Cassandra > Issue Type: Sub-task >Reporter: Benedict >Priority: Minor > > BTree.Builder could reduce its copying, and exploit toArray more efficiently, > with some work. It's not very important right now because we don't make as > much use of its bulk-add methods as we otherwise might; however, over time > this work will become more useful. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
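The bulk-loading idea in the comment can be illustrated with a small, self-contained sketch. This is not Cassandra's actual BTree/TreeBuilder API — the class and method names below are hypothetical — it only shows the bottom-up construction trick: when the input is already sorted, recursively pick the middle element as the root, producing a height-balanced tree in a single O(n) pass instead of inserting values one by one.

```java
// Hypothetical sketch of B-tree "initial construction" from sorted input,
// shown on a plain binary search tree for brevity. Illustrative names only;
// not Cassandra's TreeBuilder.
public class BulkTreeBuilder
{
    static final class Node
    {
        final int key;
        final Node left, right;

        Node(int key, Node left, Node right)
        {
            this.key = key;
            this.left = left;
            this.right = right;
        }
    }

    // Middle element becomes the root at every level, so the result is
    // height-balanced without any rebalancing passes.
    public static Node build(int[] sortedKeys)
    {
        return build(sortedKeys, 0, sortedKeys.length - 1);
    }

    private static Node build(int[] keys, int lo, int hi)
    {
        if (lo > hi)
            return null;
        int mid = (lo + hi) >>> 1;
        return new Node(keys[mid], build(keys, lo, mid - 1), build(keys, mid + 1, hi));
    }

    public static int height(Node n)
    {
        return n == null ? 0 : 1 + Math.max(height(n.left), height(n.right));
    }
}
```

The same idea generalizes to multi-way B-tree nodes: fill leaves left to right from the sorted stream, then build parent levels over them, which avoids the per-insert copying the ticket mentions.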
[jira] [Updated] (CASSANDRA-13163) NPE in StorageService.excise
[ https://issues.apache.org/jira/browse/CASSANDRA-13163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-13163: --- Description: {code} private void excise(Collection tokens, InetAddress endpoint) { logger.info("Removing tokens {} for {}", tokens, endpoint); if (tokenMetadata.isMember(endpoint)) HintsService.instance.excise(tokenMetadata.getHostId(endpoint)); {code} The check for TMD.isMember() is not enough to guarantee that TMD.getHostId() will not return null. If HintsService.excise() is called with null you get an NPE in a map lookup. was: {code} private void excise(Collection tokens, InetAddress endpoint) { logger.info("Removing tokens {} for {}", tokens, endpoint); if (tokenMetadata.isMember(endpoint)) HintsService.instance.excise(tokenMetadata.getHostId(endpoint)); {code} The check for TMD.isMember() is not enough to guarantee that TMD.getHodtId() will not return null. If HintsService.excise() is called with null you get an NPE in a map lookup. > NPE in StorageService.excise > > > Key: CASSANDRA-13163 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13163 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg > Fix For: 3.0.x, 3.x, 4.x > > > {code} > private void excise(Collection tokens, InetAddress endpoint) > { > logger.info("Removing tokens {} for {}", tokens, endpoint); > if (tokenMetadata.isMember(endpoint)) > HintsService.instance.excise(tokenMetadata.getHostId(endpoint)); > {code} > The check for TMD.isMember() is not enough to guarantee that TMD.getHostId() > will not return null. If HintsService.excise() is called with null you get an > NPE in a map lookup. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
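A minimal model of the fix the report implies — check the host id for null before calling into the hints service — might look like the sketch below. The class and method names are illustrative stand-ins, not Cassandra's real TokenMetadata/HintsService API.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Simplified model of the null guard: membership alone does not guarantee a
// host id, so check for null before excising hints. Names are hypothetical.
public class ExciseGuard
{
    private final Map<String, UUID> endpointToHostId = new ConcurrentHashMap<>();

    public void registerHostId(String endpoint, UUID hostId)
    {
        endpointToHostId.put(endpoint, hostId);
    }

    // Returns true if hints were (notionally) excised; returns false instead
    // of passing a null host id downstream, which is where the NPE occurred.
    public boolean exciseHints(String endpoint)
    {
        UUID hostId = endpointToHostId.get(endpoint); // may be null even for a "member"
        if (hostId == null)
            return false; // guard: excising with null would NPE in a map lookup
        // HintsService.instance.excise(hostId); // real call elided in this sketch
        return true;
    }
}
```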
[jira] [Updated] (CASSANDRA-12513) IOException (No such file or directory) closing MessagingService's server socket (locally)
[ https://issues.apache.org/jira/browse/CASSANDRA-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-12513: --- Assignee: Robert Stupp (was: Ariel Weisberg) > IOException (No such file or directory) closing MessagingService's server > socket (locally) > -- > > Key: CASSANDRA-12513 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12513 > Project: Cassandra > Issue Type: Bug >Reporter: Robert Stupp >Assignee: Robert Stupp >Priority: Minor > > _Sometimes_ the {{RemoveTest}} fails with the following exception. It's not > related to the test itself. > The exception is raised in > {{ServerSocketChannelImpl.implCloseSelectableChannel}}, where it checks that a > thread ID is non-zero. The {{thread}} instance field is set inside its accept > and poll methods. It looks like this is caused by some race condition - i.e. > stopping in a debugger at certain points prevents it from being triggered. > I could not find any misuse in the code base - but I want to document this > issue. > There is no difference between 8u92 and 8u102. > {code} > INFO [ACCEPT-/127.0.0.1] 2016-08-22 08:35:16,606 ?:? 
- MessagingService has > terminated the accept() thread > java.io.IOError: java.io.IOException: No such file or directory > at > org.apache.cassandra.net.MessagingService.shutdown(MessagingService.java:914) > at org.apache.cassandra.service.RemoveTest.tearDown(RemoveTest.java:103) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41) > at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) > at org.junit.runners.ParentRunner.run(ParentRunner.java:220) > at org.junit.runner.JUnitCore.run(JUnitCore.java:159) > at > com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:117) > at > com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:42) > at > com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:262) > at > com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:84) > Caused by: 
java.io.IOException: No such file or directory > at sun.nio.ch.NativeThread.signal(Native Method) > at > sun.nio.ch.ServerSocketChannelImpl.implCloseSelectableChannel(ServerSocketChannelImpl.java:292) > at > java.nio.channels.spi.AbstractSelectableChannel.implCloseChannel(AbstractSelectableChannel.java:234) > at > java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:115) > at sun.nio.ch.ServerSocketAdaptor.close(ServerSocketAdaptor.java:137) > at > org.apache.cassandra.net.MessagingService$SocketThread.close(MessagingService.java:1249) > at > org.apache.cassandra.net.MessagingService.shutdown(MessagingService.java:904) > ... 22 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-13090) Coalescing strategy sleep
[ https://issues.apache.org/jira/browse/CASSANDRA-13090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-13090: --- Summary: Coalescing strategy sleep (was: Coalescing strategy sleep too much and should be enabled by default) > Coalescing strategy sleep > - > > Key: CASSANDRA-13090 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13090 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging >Reporter: Corentin Chary > Fix For: 3.x > > Attachments: 0001-Fix-wait-time-coalescing-CASSANDRA-13090-2.patch, > 0001-Fix-wait-time-coalescing-CASSANDRA-13090.patch > > > With the current code, maybeSleep is called even if we managed to take > maxItems out of the backlog. In this case we should really avoid sleeping, > because it means that the backlog is building up. > I'll send a patch shortly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
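The intent of the described patch reduces to a single guard: skip the coalescing sleep whenever a full batch of maxItems was drained, since a full batch means the backlog is building up and sleeping would only add latency. A hedged sketch (the name below is illustrative, not the actual CoalescingStrategies API):

```java
// Minimal sketch of the guard described in the report. Hypothetical names;
// only the predicate matters: sleep to coalesce only when the drain came up
// short of a full batch.
public class CoalesceGate
{
    public static boolean shouldSleep(int itemsDrained, int maxItems)
    {
        // A full batch means producers are outpacing the drain: keep going.
        // A partial batch means waiting may let more messages coalesce.
        return itemsDrained < maxItems;
    }
}
```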
[jira] [Created] (CASSANDRA-13163) NPE in StorageService.excise
Ariel Weisberg created CASSANDRA-13163: -- Summary: NPE in StorageService.excise Key: CASSANDRA-13163 URL: https://issues.apache.org/jira/browse/CASSANDRA-13163 Project: Cassandra Issue Type: Bug Components: Core Reporter: Ariel Weisberg Assignee: Ariel Weisberg Fix For: 3.0.x, 3.x, 4.x {code} private void excise(Collection tokens, InetAddress endpoint) { logger.info("Removing tokens {} for {}", tokens, endpoint); if (tokenMetadata.isMember(endpoint)) HintsService.instance.excise(tokenMetadata.getHostId(endpoint)); {code} The check for TMD.isMember() is not enough to guarantee that TMD.getHostId() will not return null. If HintsService.excise() is called with null you get an NPE in a map lookup. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (CASSANDRA-12981) Refactor ColumnCondition
[ https://issues.apache.org/jira/browse/CASSANDRA-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler reopened CASSANDRA-12981: new test testInMarkerWithUDTs fails with: {noformat} Error Message java.lang.Integer cannot be cast to java.lang.String Stacktrace java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String at org.apache.cassandra.cql3.CQLTester.userType(CQLTester.java:1481) at org.apache.cassandra.cql3.validation.operations.InsertUpdateIfConditionTest.testInMarkerWithUDTs(InsertUpdateIfConditionTest.java:1972) {noformat} http://cassci.datastax.com/job/cassandra-3.11_utest/71/testReport/org.apache.cassandra.cql3.validation.operations/InsertUpdateIfConditionTest/testInMarkerWithUDTs/ > Refactor ColumnCondition > > > Key: CASSANDRA-12981 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12981 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Benjamin Lerer >Assignee: Benjamin Lerer > > {{ColumnCondition}} has become really difficult to understand and modify. We > should separate the logic to make improvements and maintenance easier. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-13162) Batchlog replay is throttled during bootstrap, creating conditions for incorrect query results on materialized views
[ https://issues.apache.org/jira/browse/CASSANDRA-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Deng updated CASSANDRA-13162: - Labels: bootstrap materializedviews (was: ) > Batchlog replay is throttled during bootstrap, creating conditions for > incorrect query results on materialized views > > > Key: CASSANDRA-13162 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13162 > Project: Cassandra > Issue Type: Bug >Reporter: Wei Deng >Priority: Critical > Labels: bootstrap, materializedviews > > I've tested this in a C* 3.0 cluster with a couple of Materialized Views > defined (one base table and two MVs on that base table). The data volume is > not very high per node (about 80GB of data per node total, and that > particular base table has about 25GB of data uncompressed with one MV taking > 18GB compressed and the other MV taking 3GB), and the cluster is using decent > hardware (EC2 C4.8XL with 18 cores + 60GB RAM + 18K IOPS RAID0 from two 3TB > gp2 EBS volumes). > This was originally a 9-node cluster. It appears that after adding 3 more > nodes to the DC, the system.batches table accumulated a lot of data on the 3 > new nodes, and in the subsequent week the batchlog on the 3 new nodes got > slowly replayed back to the rest of the nodes in the cluster. The bottleneck > seems to be the throttling defined in this cassandra.yaml setting: > batchlog_replay_throttle_in_kb, which by default is set to 1MB/s. > Given that it is taking almost a week (and still hasn't finished) for the > batchlog (from MV) to be replayed after the bootstrap finishes, it seems only > reasonable to unthrottle (or at least give it a much higher throttle rate) > during the initial bootstrap, and hence I'd consider this a bug for our > current MV implementation. 
> Also as far as I understand, the bootstrap logic won't wait for the > backlogged batchlog to be fully replayed before changing the new > bootstrapping node to "UN" state, and if batchlog for the MVs got stuck in > this state for a long time, we basically will get wrong answers on the MVs > during that whole duration (until batchlog is fully played to the cluster), > which adds even more criticality to this bug. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
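The numbers in the report above can be sanity-checked. A minimal sketch (the ~20 GB per-node backlog under system.batches is an illustrative assumption based on this report; the default batchlog_replay_throttle_in_kb of 1024, i.e. ~1 MB/s, is the setting named above) gives a best-case lower bound on replay time:

```python
# Best-case replay time for one node's batchlog backlog at the default throttle.
# The 20 GB backlog size is an illustrative assumption from this report; the
# throttle default (batchlog_replay_throttle_in_kb: 1024, i.e. ~1 MB/s) is the
# cassandra.yaml setting named in the description above.
backlog_bytes = 20 * 1024**3          # ~20 GB under system.batches
throttle_bytes_per_s = 1024 * 1024    # 1024 KB/s throttle
seconds = backlog_bytes / throttle_bytes_per_s
hours = seconds / 3600
print(f"lower bound: {hours:.1f} hours per node")  # → lower bound: 5.7 hours per node
```

Even the best case is several hours per node; the week-long replay actually observed suggests the effective throughput was far below the cap (per-mutation overhead dominates), so raising the throttle during bootstrap removes the hard ceiling but may not be the whole fix.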
[jira] [Updated] (CASSANDRA-13162) Batchlog replay is throttled during bootstrap, creating conditions for incorrect query results on materialized views
[ https://issues.apache.org/jira/browse/CASSANDRA-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Deng updated CASSANDRA-13162: - Priority: Critical (was: Major) > Batchlog replay is throttled during bootstrap, creating conditions for > incorrect query results on materialized views > > > Key: CASSANDRA-13162 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13162 > Project: Cassandra > Issue Type: Bug >Reporter: Wei Deng >Priority: Critical -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-13162) Batchlog replay is throttled during bootstrap
Wei Deng created CASSANDRA-13162: Summary: Batchlog replay is throttled during bootstrap Key: CASSANDRA-13162 URL: https://issues.apache.org/jira/browse/CASSANDRA-13162 Project: Cassandra Issue Type: Bug Reporter: Wei Deng I've tested this in a C* 3.0 cluster with a couple of Materialized Views defined (one base table and two MVs on that base table). The data volume is not very high per node (about 80GB of data per node total, and that particular base table has about 25GB of data uncompressed with one MV taking 18GB compressed and the other MV taking 3GB), and the cluster is using decent hardware (EC2 C4.8XL with 18 cores + 60GB RAM + 18K IOPS RAID0 from two 3TB gp2 EBS volumes). This was originally a 9-node cluster. It appears that after adding 3 more nodes to the DC, the system.batches table accumulated a lot of data on the 3 new nodes, and in the subsequent week the batchlog on the 3 new nodes got slowly replayed back to the rest of the nodes in the cluster. The bottleneck seems to be the throttling defined in this cassandra.yaml setting: batchlog_replay_throttle_in_kb, which by default is set to 1MB/s. Given that it is taking almost a week (and still hasn't finished) for the batchlog (from MV) to be replayed after the bootstrap finishes, it seems only reasonable to unthrottle (or at least give it a much higher throttle rate) during the initial bootstrap, and hence I'd consider this a bug for our current MV implementation. Also as far as I understand, the bootstrap logic won't wait for the backlogged batchlog to be fully replayed before changing the new bootstrapping node to "UN" state, and if batchlog for the MVs got stuck in this state for a long time, we basically will get wrong answers on the MVs during that whole duration (until batchlog is fully played to the cluster), which adds even more criticality to this bug. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
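For reference, the throttle discussed in this report is a single cassandra.yaml knob. A sketch of the relevant line with the shipped default described above (the comment text here is paraphrase, not the stock file's wording):

```yaml
# cassandra.yaml (sketch)
# Per-node cap on batchlog replay throughput, in KB/s. The 1024 default
# (~1 MB/s) is the value this report identifies as the bottleneck; raising
# it temporarily after bootstrap speeds up MV batchlog replay.
batchlog_replay_throttle_in_kb: 1024
```

Changing this value requires a node restart on the affected versions, which is part of why the report argues the bootstrap path itself should unthrottle or raise the rate.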
[cassandra] Git Push Summary
Repository: cassandra Updated Tags: refs/tags/3.10-tentative [deleted] 9c2ab2555
[jira] [Updated] (CASSANDRA-13162) Batchlog replay is throttled during bootstrap, creating conditions for incorrect query results on materialized views
[ https://issues.apache.org/jira/browse/CASSANDRA-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Deng updated CASSANDRA-13162: - Summary: Batchlog replay is throttled during bootstrap, creating conditions for incorrect query results on materialized views (was: Batchlog replay is throttled during bootstrap) > Batchlog replay is throttled during bootstrap, creating conditions for > incorrect query results on materialized views > > > Key: CASSANDRA-13162 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13162 > Project: Cassandra > Issue Type: Bug >Reporter: Wei Deng -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-13013) Potential regression from CASSANDRA-6377
[ https://issues.apache.org/jira/browse/CASSANDRA-13013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-13013: Fix Version/s: 3.10 > Potential regression from CASSANDRA-6377 > > > Key: CASSANDRA-13013 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13013 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Sylvain Lebresne >Assignee: Benjamin Lerer > Fix For: 3.10 > > > As noted by [~thobbs] in CASSANDRA-12768, in 3.0 (and prior) [we return > static > results|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java#L753-L754] > for a partition if it is the only results when the query is a 2ndary index > query. This doesn't seem to happen anymore in 3.X and that was removed by > CASSANDRA-6377 (see the [merge > commit|https://github.com/apache/cassandra/commit/8c83c8edab4f1c23c382bb0ac076cab44d5f4dda#diff-75ebe654dcf6c8c474f787abaf47bb68L705], > but that removal is actually part of the original [trunk patch for > CASSANDRA-6377|https://github.com/blerer/cassandra/commit/e22a311a6d379a9f81668b7995501962ba705380#diff-75ebe654dcf6c8c474f787abaf47bb68L705]). > The removal looks intentional but it's unclear to [~thobbs] and myself why > it's not a potentially breaking change, and even if it's a legit change, why > it was done in 3.X (then trunk) but not 3.0? > [~blerer], can you enlighten us? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-13013) Potential regression from CASSANDRA-6377
[ https://issues.apache.org/jira/browse/CASSANDRA-13013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer updated CASSANDRA-13013: --- Resolution: Fixed Status: Resolved (was: Patch Available) Committed into 3.11 at 77f0f68313a4c36bf86363e71f4775e36ccf85bc and merged into trunk > Potential regression from CASSANDRA-6377 > > > Key: CASSANDRA-13013 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13013 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Sylvain Lebresne >Assignee: Benjamin Lerer > > As noted by [~thobbs] in CASSANDRA-12768, in 3.0 (and prior) [we return > static > results|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java#L753-L754] > for a partition if it is the only results when the query is a 2ndary index > query. This doesn't seem to happen anymore in 3.X and that was removed by > CASSANDRA-6377 (see the [merge > commit|https://github.com/apache/cassandra/commit/8c83c8edab4f1c23c382bb0ac076cab44d5f4dda#diff-75ebe654dcf6c8c474f787abaf47bb68L705], > but that removal is actually part of the original [trunk patch for > CASSANDRA-6377|https://github.com/blerer/cassandra/commit/e22a311a6d379a9f81668b7995501962ba705380#diff-75ebe654dcf6c8c474f787abaf47bb68L705]). > The removal looks intentional but it's unclear to [~thobbs] and myself why > it's not a potentially breaking change, and even if it's a legit change, why > it was done in 3.X (then trunk) but not 3.0? > [~blerer], can you enlighten us? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-13013) Potential regression from CASSANDRA-6377
[ https://issues.apache.org/jira/browse/CASSANDRA-13013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer updated CASSANDRA-13013: --- Status: Patch Available (was: Open) > Potential regression from CASSANDRA-6377 > > > Key: CASSANDRA-13013 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13013 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Sylvain Lebresne >Assignee: Benjamin Lerer > > As noted by [~thobbs] in CASSANDRA-12768, in 3.0 (and prior) [we return > static > results|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java#L753-L754] > for a partition if it is the only results when the query is a 2ndary index > query. This doesn't seem to happen anymore in 3.X and that was removed by > CASSANDRA-6377 (see the [merge > commit|https://github.com/apache/cassandra/commit/8c83c8edab4f1c23c382bb0ac076cab44d5f4dda#diff-75ebe654dcf6c8c474f787abaf47bb68L705], > but that removal is actually part of the original [trunk patch for > CASSANDRA-6377|https://github.com/blerer/cassandra/commit/e22a311a6d379a9f81668b7995501962ba705380#diff-75ebe654dcf6c8c474f787abaf47bb68L705]). > The removal looks intentional but it's unclear to [~thobbs] and myself why > it's not a potentially breaking change, and even if it's a legit change, why > it was done in 3.X (then trunk) but not 3.0? > [~blerer], can you enlighten us? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-13025) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_x_To_indev_3_x.static_columns_with_distinct_test
[ https://issues.apache.org/jira/browse/CASSANDRA-13025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-13025: -- Fix Version/s: (was: 3.0.x) 3.11 > dtest failure in > upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_x_To_indev_3_x.static_columns_with_distinct_test > > > Key: CASSANDRA-13025 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13025 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Sean McCarthy >Assignee: Sylvain Lebresne >Priority: Critical > Labels: dtest, test-failure > Fix For: 3.10, 3.11 > > > example failure: > http://cassci.datastax.com/job/cassandra-3.X_dtest_upgrade/28/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_x_To_indev_3_x/static_columns_with_distinct_test > {code} > Error Message > > {code}{code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/tools/decorators.py", line 46, in > wrapped > f(obj) > File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line > 4010, in static_columns_with_distinct_test > rows = list(cursor.execute("SELECT DISTINCT k, s1 FROM test2")) > File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line > 1998, in execute > return self.execute_async(query, parameters, trace, custom_payload, > timeout, execution_profile, paging_state).result() > File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line > 3784, in result > raise self._final_exception > {code}{code} > Standard Output > http://git-wip-us.apache.org/repos/asf/cassandra.git > git:7eac22dd41cb09e6d64fb5ac48b2cca3c8840cc8 > Unexpected error in node2 log, error: > ERROR [Native-Transport-Requests-2] 2016-12-08 03:20:04,861 Message.java:617 > - Unexpected exception during request; channel = [id: 0xf4c13f2c, > L:/127.0.0.2:9042 - R:/127.0.0.1:52112] > java.io.IOError: java.io.IOException: Corrupt empty row found in 
unfiltered > partition > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:224) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:212) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:133) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.cql3.statements.SelectStatement.processPartition(SelectStatement.java:779) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:741) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:408) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:273) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:232) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:188) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:219) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:204) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:513) > [apache-cassandra-3.9.jar:3.9] > at > 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:407) > [apache-cassandra-3.9.jar:3.9] > at > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.39.Final.jar:4.0.39.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366) > [netty-all-4.0.39.Final.jar:4.0.39.Final] > at > io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35) >
[jira] [Comment Edited] (CASSANDRA-11583) Exception when streaming sstables using `sstableloader`
[ https://issues.apache.org/jira/browse/CASSANDRA-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843131#comment-15843131 ] Cristian P edited comment on CASSANDRA-11583 at 1/27/17 5:09 PM: - I think I hit the same problem (for apache cassandra 2.1.13). I have the exact same error. What I noticed is that if I avoid the -i flag (ignore hosts) it is working as expected. ERROR 17:02:14 [Stream #fdaa4e80-e4b1-11e6-aae3-8b496c707234] Streaming error occurred java.lang.AssertionError: null at org.apache.cassandra.io.sstable.SSTableLoader.releaseReferences(SSTableLoader.java:208) ~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.io.sstable.SSTableLoader.onSuccess(SSTableLoader.java:193) ~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.io.sstable.SSTableLoader.onSuccess(SSTableLoader.java:48) ~[apache-cassandra-2.1.13.jar:2.1.13] at com.google.common.util.concurrent.Futures$4.run(Futures.java:1181) ~[guava-16.0.jar:na] at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) ~[guava-16.0.jar:na] at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156) ~[guava-16.0.jar:na] at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145) ~[guava-16.0.jar:na] at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:185) ~[guava-16.0.jar:na] at org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:213) ~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:184) ~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:415) ~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.streaming.StreamSession.complete(StreamSession.java:607) ~[apache-cassandra-2.1.13.jar:2.1.13] at 
org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:471) ~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:256) ~[apache-cassandra-2.1.13.jar:2.1.13] was (Author: cmposto): I think I hit the same problem. I have the exact same error. What I noticed is that if I avoid the -i flag (ignore hosts) it is working as expected. ERROR 17:02:14 [Stream #fdaa4e80-e4b1-11e6-aae3-8b496c707234] Streaming error occurred java.lang.AssertionError: null at org.apache.cassandra.io.sstable.SSTableLoader.releaseReferences(SSTableLoader.java:208) ~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.io.sstable.SSTableLoader.onSuccess(SSTableLoader.java:193) ~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.io.sstable.SSTableLoader.onSuccess(SSTableLoader.java:48) ~[apache-cassandra-2.1.13.jar:2.1.13] at com.google.common.util.concurrent.Futures$4.run(Futures.java:1181) ~[guava-16.0.jar:na] at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) ~[guava-16.0.jar:na] at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156) ~[guava-16.0.jar:na] at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145) ~[guava-16.0.jar:na] at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:185) ~[guava-16.0.jar:na] at org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:213) ~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:184) ~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:415) ~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.streaming.StreamSession.complete(StreamSession.java:607) ~[apache-cassandra-2.1.13.jar:2.1.13] at 
org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:471) ~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:256) ~[apache-cassandra-2.1.13.jar:2.1.13] > Exception when streaming sstables using `sstableloader` > --- > > Key: CASSANDRA-11583 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11583 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: $ uname -a > Linux bigdb-100 3.2.0-99-virtual #139-Ubuntu SMP Mon Feb 1 23:52:21 UTC
[jira] [Commented] (CASSANDRA-11583) Exception when streaming sstables using `sstableloader`
[ https://issues.apache.org/jira/browse/CASSANDRA-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843131#comment-15843131 ] Cristian P commented on CASSANDRA-11583: I think I hit the same problem. I have the exact same error. What I noticed is that if I avoid the -i flag (ignore hosts) it is working as expected. ERROR 17:02:14 [Stream #fdaa4e80-e4b1-11e6-aae3-8b496c707234] Streaming error occurred java.lang.AssertionError: null at org.apache.cassandra.io.sstable.SSTableLoader.releaseReferences(SSTableLoader.java:208) ~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.io.sstable.SSTableLoader.onSuccess(SSTableLoader.java:193) ~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.io.sstable.SSTableLoader.onSuccess(SSTableLoader.java:48) ~[apache-cassandra-2.1.13.jar:2.1.13] at com.google.common.util.concurrent.Futures$4.run(Futures.java:1181) ~[guava-16.0.jar:na] at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) ~[guava-16.0.jar:na] at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156) ~[guava-16.0.jar:na] at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145) ~[guava-16.0.jar:na] at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:185) ~[guava-16.0.jar:na] at org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:213) ~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:184) ~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:415) ~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.streaming.StreamSession.complete(StreamSession.java:607) ~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:471) 
~[apache-cassandra-2.1.13.jar:2.1.13] at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:256) ~[apache-cassandra-2.1.13.jar:2.1.13] > Exception when streaming sstables using `sstableloader` > --- > > Key: CASSANDRA-11583 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11583 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: $ uname -a > Linux bigdb-100 3.2.0-99-virtual #139-Ubuntu SMP Mon Feb 1 23:52:21 UTC 2016 > x86_64 x86_64 x86_64 GNU/Linux > I am using Datastax Enterprise 4.7.8-1 which is based on 2.1.13. >Reporter: Jens Rantil > > This bug came out of CASSANDRA-11562. > I have have a keyspace snapshotted from a 2.1.11 (DSE 4.7.5-1) node. When I'm > running the `sstableloader` I get the following output/exception: > {noformat} > # sstableloader --nodes X.X.X.20 --username YYY --password ZZZ --ignore XXX > /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2 > Established connection to initial hosts > Opening sstables and calculating sections to stream > Streaming relevant part of > /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/XXX-ZZZ-ka-6463-Data.db > > /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/tink-ZZZ-ka-6464-Data.db > to [/X.X.X.33, /X.X.X.113, /X.X.X.32, /X.X.X.20, /X.X.X.122, /X.X.X.176, > /X.X.X.143, /X.X.X.172, /X.X.X.50, /X.X.X.51, /X.X.X.52, /X.X.X.71, > /X.X.X.53, /X.X.X.54, /X.X.X.47, /X.X.X.31, /X.X.X.8] > progress: [/X.X.X.113]0:0/2 0 % [/X.X.X.143]0:0/2 0 % [/X.X.X.172]0:0/2 0 > % [/X.X.X.20]0:0/2 0 % [/X.X.X.71]0:0/2 0 % [/X.X.X.122]0:0/2 0 % > [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 0 % [/X.X.X.143]0:0/2 0 % > [/X.X.X.172]0:0/2 0 % [/X.X.X.20]0:1/2 1 % [/X.X.X.71]0:0/2 0 % > [/X.X.X.122]0:0/2 0 % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 0 % > [/X.X.X.143]0:0/2 1 % [/X.X.X.172]0:0/2 0 % [/X.X.X.20]0:1/2 1 % > [/X.X.X.71]0:0/2 0 % [/X.X.X.122]0:0/2 0 % [/X.X.X.47]0:0/2 progress: > [/X.X.X.113]0:0/2 0 % 
[/X.X.X.143]0:1/2 1 % [/X.X.X.172]0:0/2 0 % > [/X.X.X.20]0:1/2 1 % [/X.X.X.71]0:0/2 0 % [/X.X.X.122]0:0/2 0 % > [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 0 % [/X.X.X.143]0:1/2 1 % > [/X.X.X.172]0:0/2 0 % [/X.X.X.20]0:1/2 1 % [/X.X.X.71]0:1/2 1 % > [/X.X.X.122]0:0/2 0 % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 0 % > [/X.X.X.143]0:1/2 1 % [/X.X.X.172]0:0/2 0 % [/X.X.X.20]0:1/2 1 % > [/X.X.X.71]0:1/2 1 % [/X.X.X.122]0:1/2 1 % [/X.X.X.47]0:0/2 progress: > [/X.X.X.113]0:0/2 0 % [/X.X.X.143]0:1/2 1 % [/X.X.X.172]0:0/2 0 % > [/X.X.X.20]0:1/2 1 % [/X.X.X.71]0:1/2 1 % [/X.X.X.122]0:1/2 1 % > [/X.X.X.47]0:0/2 progress:
[jira] [Created] (CASSANDRA-13161) testall failure in org.apache.cassandra.db.commitlog.CommitLogDescriptorTest.testVersions
Sean McCarthy created CASSANDRA-13161: - Summary: testall failure in org.apache.cassandra.db.commitlog.CommitLogDescriptorTest.testVersions Key: CASSANDRA-13161 URL: https://issues.apache.org/jira/browse/CASSANDRA-13161 Project: Cassandra Issue Type: Bug Components: Testing Reporter: Sean McCarthy Attachments: TEST-org.apache.cassandra.db.commitlog.CommitLogDescriptorTest.log example failure: http://cassci.datastax.com/job/trunk_testall/1374/testReport/org.apache.cassandra.db.commitlog/CommitLogDescriptorTest/testVersions {code} Error Message expected:<11> but was:<10> {code}{code} Stacktrace junit.framework.AssertionFailedError: expected:<11> but was:<10> at org.apache.cassandra.db.commitlog.CommitLogDescriptorTest.testVersions(CommitLogDescriptorTest.java:84) {code} Related Failures: http://cassci.datastax.com/job/trunk_testall/1374/testReport/org.apache.cassandra.db.commitlog/CommitLogDescriptorTest/testVersions_compression/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[1/2] cassandra git commit: Fix secondary index queries regression
Repository: cassandra Updated Branches: refs/heads/trunk e71a49e81 -> 3580f6c05 Fix secondary index queries regression patch by Benjamin Lerer; reviewed by Sylvain Lebresne for CASSANDRA-13013 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/77f0f683 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/77f0f683 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/77f0f683 Branch: refs/heads/trunk Commit: 77f0f68313a4c36bf86363e71f4775e36ccf85bc Parents: 078a841 Author: Benjamin LererAuthored: Fri Jan 27 16:06:25 2017 +0100 Committer: Benjamin Lerer Committed: Fri Jan 27 16:06:25 2017 +0100 -- CHANGES.txt | 1 + .../cql3/restrictions/RestrictionSet.java | 23 -- .../restrictions/StatementRestrictions.java | 26 +--- .../cql3/statements/ModificationStatement.java | 8 ++-- .../cql3/statements/SelectStatement.java| 13 +- .../validation/entities/SecondaryIndexTest.java | 44 6 files changed, 100 insertions(+), 15 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/77f0f683/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 6f7b5c2..a32ef2f 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.10 + * Fix secondary index queries regression (CASSANDRA-13013) * Add duration type to the protocol V5 (CASSANDRA-12850) * Fix duration type validation (CASSANDRA-13143) * Fix flaky GcCompactionTest (CASSANDRA-12664) http://git-wip-us.apache.org/repos/asf/cassandra/blob/77f0f683/src/java/org/apache/cassandra/cql3/restrictions/RestrictionSet.java -- diff --git a/src/java/org/apache/cassandra/cql3/restrictions/RestrictionSet.java b/src/java/org/apache/cassandra/cql3/restrictions/RestrictionSet.java index 0876f3e..3a1bcb1 100644 --- a/src/java/org/apache/cassandra/cql3/restrictions/RestrictionSet.java +++ b/src/java/org/apache/cassandra/cql3/restrictions/RestrictionSet.java @@ -72,14 +72,14 @@ final class RestrictionSet implements Restrictions, Iterable } @Override 
-public final void addRowFilterTo(RowFilter filter, SecondaryIndexManager indexManager, QueryOptions options) throws InvalidRequestException +public void addRowFilterTo(RowFilter filter, SecondaryIndexManager indexManager, QueryOptions options) throws InvalidRequestException { for (Restriction restriction : restrictions.values()) restriction.addRowFilterTo(filter, indexManager, options); } @Override -public final List getColumnDefs() +public List getColumnDefs() { return new ArrayList<>(restrictions.keySet()); } @@ -92,18 +92,33 @@ final class RestrictionSet implements Restrictions, Iterable } @Override -public final boolean isEmpty() +public boolean isEmpty() { return restrictions.isEmpty(); } @Override -public final int size() +public int size() { return restrictions.size(); } /** + * Checks if one of the restrictions applies to a column of the specific kind. + * @param kind the column kind + * @return {@code true} if one of the restrictions applies to a column of the specific kind, {@code false} otherwise. + */ +public boolean hasRestrictionFor(ColumnDefinition.Kind kind) +{ +for (ColumnDefinition column : restrictions.keySet()) +{ +if (column.kind == kind) +return true; +} +return false; +} + +/** * Adds the specified restriction to this set of restrictions. 
* * @param restriction the restriction to add http://git-wip-us.apache.org/repos/asf/cassandra/blob/77f0f683/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java -- diff --git a/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java b/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java index 6b89579..8a8ee56 100644 --- a/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java +++ b/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java @@ -24,6 +24,7 @@ import com.google.common.base.Joiner; import org.apache.cassandra.config.CFMetaData; import org.apache.cassandra.config.ColumnDefinition; +import org.apache.cassandra.config.ColumnDefinition.Kind; import org.apache.cassandra.cql3.*; import org.apache.cassandra.cql3.functions.Function; import org.apache.cassandra.cql3.statements.Bound; @@ -96,6 +97,12
[2/2] cassandra git commit: Merge branch cassandra-3.11 into trunk
Merge branch cassandra-3.11 into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3580f6c0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3580f6c0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3580f6c0
Branch: refs/heads/trunk
Commit: 3580f6c05b43929add7d5d982ffdff7a26385fda
Parents: e71a49e 77f0f68
Author: Benjamin Lerer
Authored: Fri Jan 27 16:25:22 2017 +0100
Committer: Benjamin Lerer
Committed: Fri Jan 27 16:25:22 2017 +0100

----------------------------------------------------------------
 CHANGES.txt                                     |  1 +
 .../cql3/restrictions/RestrictionSet.java       | 23 --
 .../restrictions/StatementRestrictions.java     | 26 +---
 .../cql3/statements/ModificationStatement.java  |  8 ++--
 .../cql3/statements/SelectStatement.java        | 17 ++--
 .../validation/entities/SecondaryIndexTest.java | 44
 6 files changed, 102 insertions(+), 17 deletions(-)
----------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3580f6c0/CHANGES.txt

diff --cc CHANGES.txt
index d113645,a32ef2f..e695bb0
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,36 -1,5 +1,37 @@@
+4.0
+ * Refactor ColumnCondition (CASSANDRA-12981)
+ * Parallelize streaming of different keyspaces (CASSANDRA-4663)
+ * Improved compactions metrics (CASSANDRA-13015)
+ * Speed-up start-up sequence by avoiding un-needed flushes (CASSANDRA-13031)
+ * Use Caffeine (W-TinyLFU) for on-heap caches (CASSANDRA-10855)
+ * Thrift removal (CASSANDRA-5)
+ * Remove pre-3.0 compatibility code for 4.0 (CASSANDRA-12716)
+ * Add column definition kind to dropped columns in schema (CASSANDRA-12705)
+ * Add (automate) Nodetool Documentation (CASSANDRA-12672)
+ * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736)
+ * Reject invalid replication settings when creating or altering a keyspace (CASSANDRA-12681)
+ * Clean up the SSTableReader#getScanner API wrt removal of RateLimiter (CASSANDRA-12422)
+ * Use new token allocation for non bootstrap case as well (CASSANDRA-13080)
+ * Avoid byte-array copy when key cache is disabled (CASSANDRA-13084)
+ * More fixes to the TokenAllocator (CASSANDRA-12990)
+ * Require forceful decommission if number of nodes is less than replication factor (CASSANDRA-12510)
+ * Allow IN restrictions on column families with collections (CASSANDRA-12654)
+ * Move to FastThreadLocalThread and FastThreadLocal (CASSANDRA-13034)
+ * nodetool stopdaemon errors out (CASSANDRA-13030)
+ * Log message size in trace message in OutboundTcpConnection (CASSANDRA-13028)
+ * Add timeUnit Days for cassandra-stress (CASSANDRA-13029)
+ * Add mutation size and batch metrics (CASSANDRA-12649)
+ * Add method to get size of endpoints to TokenMetadata (CASSANDRA-12999)
+ * Fix primary index calculation for SASI (CASSANDRA-12910)
+ * Expose time spent waiting in thread pool queue (CASSANDRA-8398)
+ * Conditionally update index built status to avoid unnecessary flushes (CASSANDRA-12969)
+ * NoReplicationTokenAllocator should work with zero replication factor (CASSANDRA-12983)
+ * cqlsh auto completion: refactor definition of compaction strategy options (CASSANDRA-12946)
+ * Add support for arithmetic operators (CASSANDRA-11935)
+ * Tables in system_distributed should not use gcgs of 0 (CASSANDRA-12954)
+ 3.10
+ * Fix secondary index queries regression (CASSANDRA-13013)
 * Add duration type to the protocol V5 (CASSANDRA-12850)
 * Fix duration type validation (CASSANDRA-13143)
 * Fix flaky GcCompactionTest (CASSANDRA-12664)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3580f6c0/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
http://git-wip-us.apache.org/repos/asf/cassandra/blob/3580f6c0/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
http://git-wip-us.apache.org/repos/asf/cassandra/blob/3580f6c0/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java

diff --cc src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 4e7323b,e8b4600..7e66dc4
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@@ -787,18 -782,6 +787,18 @@@ public class SelectStatement implement
         }
     }

+// Determines whether, when we have a partition result with no rows, we still return the static content (as a
+// result set row with null for all other regular columns.)
+
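The comment added to SelectStatement concerns CQL static columns: a SELECT over a partition that has static values but no regular rows may still surface a single result row carrying the static content, with nulls in every regular column. The sketch below is a toy model of that decision only — the types and method names are simplified stand-ins, not the real SelectStatement logic:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class StaticRowSketch
{
    /**
     * Models the result rows for one partition: if the partition has no regular rows
     * but does carry static content, emit a single row of the static values padded
     * with nulls for every regular column; otherwise return the regular rows as-is.
     */
    static List<List<String>> resultRows(List<String> staticValues,
                                         List<List<String>> regularRows,
                                         int regularColumnCount)
    {
        if (!regularRows.isEmpty() || staticValues.isEmpty())
            return regularRows;

        List<String> padded = new ArrayList<>(staticValues);
        for (int i = 0; i < regularColumnCount; i++)
            padded.add(null); // null for each regular column

        List<List<String>> result = new ArrayList<>();
        result.add(padded);
        return result;
    }

    public static void main(String[] args)
    {
        // Partition with static value "s1", no regular rows, two regular columns:
        List<List<String>> rows = resultRows(Arrays.asList("s1"), new ArrayList<>(), 2);
        if (rows.size() != 1 || rows.get(0).get(1) != null)
            throw new AssertionError();
        System.out.println("ok");
    }
}
```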
[jira] [Commented] (CASSANDRA-12437) dtest failure in bootstrap_test.TestBootstrap.local_quorum_bootstrap_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842964#comment-15842964 ] Philip Thompson commented on CASSANDRA-12437: - Why would we not just specify the correct port for stress to connect to? > dtest failure in bootstrap_test.TestBootstrap.local_quorum_bootstrap_test > - > > Key: CASSANDRA-12437 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12437 > Project: Cassandra > Issue Type: Test >Reporter: Craig Kodman >Assignee: DS Test Eng > Labels: dtest, windows > Attachments: node1_debug.log, node1_gc.log, node1.log, > node2_debug.log, node2_gc.log, node2.log > > > example failure: > http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/281/testReport/bootstrap_test/TestBootstrap/local_quorum_bootstrap_test > {code} > Stacktrace > File "C:\tools\python2\lib\unittest\case.py", line 329, in run > testMethod() > File > "D:\jenkins\workspace\cassandra-2.2_dtest_win32\cassandra-dtest\bootstrap_test.py", > line 389, in local_quorum_bootstrap_test > 'ops(insert=1)', '-rate', 'threads=50']) > File "D:\jenkins\workspace\cassandra-2.2_dtest_win32\ccm\ccmlib\node.py", > line 1244, in stress > return handle_external_tool_process(p, ['stress'] + stress_options) > File "D:\jenkins\workspace\cassandra-2.2_dtest_win32\ccm\ccmlib\node.py", > line 1955, in handle_external_tool_process > raise ToolError(cmd_args, rc, out, err) > 'Subprocess [\'stress\', \'user\', \'profile=d:temp2tmp8sf4da\', > \'n=2M\', \'no-warmup\', \'ops(insert=1)\', \'-rate\', \'threads=50\'] exited > with non-zero status; exit status: 1; \nstderr: Exception in thread "main" > java.io.IOError: java.io.FileNotFoundException: d:\\temp\\2\\tmp8sf4da (The > process cannot access the file because it is being used by another > process)\r\n\tat > org.apache.cassandra.stress.StressProfile.load(StressProfile.java:574)\r\n\tat > > org.apache.cassandra.stress.settings.SettingsCommandUser.(SettingsCommandUser.java:58)\r\n\tat > > 
org.apache.cassandra.stress.settings.SettingsCommandUser.build(SettingsCommandUser.java:127)\r\n\tat > > org.apache.cassandra.stress.settings.SettingsCommand.get(SettingsCommand.java:195)\r\n\tat > > org.apache.cassandra.stress.settings.StressSettings.get(StressSettings.java:249)\r\n\tat > > org.apache.cassandra.stress.settings.StressSettings.parse(StressSettings.java:220)\r\n\tat > org.apache.cassandra.stress.Stress.main(Stress.java:63)\r\nCaused by: > java.io.FileNotFoundException: d:\\temp\\2\\tmp8sf4da (The process cannot > access the file because it is being used by another process)\r\n\tat > java.io.FileInputStream.open0(Native Method)\r\n\tat > java.io.FileInputStream.open(FileInputStream.java:195)\r\n\tat > java.io.FileInputStream.(FileInputStream.java:138)\r\n\tat > java.io.FileInputStream.(FileInputStream.java:93)\r\n\tat > sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)\r\n\tat > > sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)\r\n\tat > java.net.URL.openStream(URL.java:1038)\r\n\tat > org.apache.cassandra.stress.StressProfile.load(StressProfile.java:560)\r\n\t... > 6 more\r\n\n >> begin captured logging << > \ndtest: DEBUG: cluster ccm directory: > d:\\temp\\2\\dtest-wsze0r\ndtest: DEBUG: Done setting configuration > options:\n{ \'initial_token\': None,\n\'num_tokens\': \'32\',\n > \'phi_convict_threshold\': 5,\n\'start_rpc\': > \'true\'}\n- >> end captured logging << > -' > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
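The stack trace points at a Windows sharing violation: the temporary stress-profile file is still held open by the test process when cassandra-stress tries to read it, so `FileInputStream.open0` fails with "The process cannot access the file because it is being used by another process". A minimal sketch of the safe pattern — write the temp file and fully close the handle before handing its path to the external tool. This is illustrative only, not the ccm/dtest code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TempProfileHandoff
{
    // Write the profile and close the handle BEFORE anything else opens the path.
    // Files.write opens, writes, and closes the file in one call, so no handle
    // lingers to trigger a Windows sharing violation in the child process.
    static Path writeProfile(String yaml) throws IOException
    {
        Path profile = Files.createTempFile("stress-profile", ".yaml");
        Files.write(profile, yaml.getBytes("UTF-8"));
        return profile;
    }

    public static void main(String[] args) throws IOException
    {
        Path p = writeProfile("keyspace: ks\n");
        // Re-opening the closed file succeeds on every platform.
        String back = new String(Files.readAllBytes(p), "UTF-8");
        if (!back.startsWith("keyspace"))
            throw new AssertionError();
        Files.deleteIfExists(p);
        System.out.println("ok");
    }
}
```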
cassandra git commit: Fix secondary index queries regression
Repository: cassandra
Updated Branches: refs/heads/cassandra-3.11 078a84154 -> 77f0f6831

Fix secondary index queries regression

patch by Benjamin Lerer; reviewed by Sylvain Lebresne for CASSANDRA-13013

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/77f0f683
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/77f0f683
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/77f0f683
Branch: refs/heads/cassandra-3.11
Commit: 77f0f68313a4c36bf86363e71f4775e36ccf85bc
Parents: 078a841
Author: Benjamin Lerer
Authored: Fri Jan 27 16:06:25 2017 +0100
Committer: Benjamin Lerer
Committed: Fri Jan 27 16:06:25 2017 +0100

----------------------------------------------------------------
 CHANGES.txt                                     |  1 +
 .../cql3/restrictions/RestrictionSet.java       | 23 --
 .../restrictions/StatementRestrictions.java     | 26 +---
 .../cql3/statements/ModificationStatement.java  |  8 ++--
 .../cql3/statements/SelectStatement.java        | 13 +-
 .../validation/entities/SecondaryIndexTest.java | 44
 6 files changed, 100 insertions(+), 15 deletions(-)
----------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/77f0f683/CHANGES.txt

diff --git a/CHANGES.txt b/CHANGES.txt
index 6f7b5c2..a32ef2f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.10
+ * Fix secondary index queries regression (CASSANDRA-13013)
 * Add duration type to the protocol V5 (CASSANDRA-12850)
 * Fix duration type validation (CASSANDRA-13143)
 * Fix flaky GcCompactionTest (CASSANDRA-12664)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/77f0f683/src/java/org/apache/cassandra/cql3/restrictions/RestrictionSet.java

diff --git a/src/java/org/apache/cassandra/cql3/restrictions/RestrictionSet.java b/src/java/org/apache/cassandra/cql3/restrictions/RestrictionSet.java
index 0876f3e..3a1bcb1 100644
--- a/src/java/org/apache/cassandra/cql3/restrictions/RestrictionSet.java
+++ b/src/java/org/apache/cassandra/cql3/restrictions/RestrictionSet.java
@@ -72,14 +72,14 @@ final class RestrictionSet implements Restrictions, Iterable<Restriction>
     }

     @Override
-    public final void addRowFilterTo(RowFilter filter, SecondaryIndexManager indexManager, QueryOptions options) throws InvalidRequestException
+    public void addRowFilterTo(RowFilter filter, SecondaryIndexManager indexManager, QueryOptions options) throws InvalidRequestException
     {
         for (Restriction restriction : restrictions.values())
             restriction.addRowFilterTo(filter, indexManager, options);
     }

     @Override
-    public final List<ColumnDefinition> getColumnDefs()
+    public List<ColumnDefinition> getColumnDefs()
     {
         return new ArrayList<>(restrictions.keySet());
     }
@@ -92,18 +92,33 @@ final class RestrictionSet implements Restrictions, Iterable<Restriction>
     }

     @Override
-    public final boolean isEmpty()
+    public boolean isEmpty()
     {
         return restrictions.isEmpty();
     }

     @Override
-    public final int size()
+    public int size()
     {
         return restrictions.size();
     }

     /**
+     * Checks if one of the restrictions applies to a column of the specific kind.
+     * @param kind the column kind
+     * @return {@code true} if one of the restrictions applies to a column of the specific kind, {@code false} otherwise.
+     */
+    public boolean hasRestrictionFor(ColumnDefinition.Kind kind)
+    {
+        for (ColumnDefinition column : restrictions.keySet())
+        {
+            if (column.kind == kind)
+                return true;
+        }
+        return false;
+    }
+
+    /**
     * Adds the specified restriction to this set of restrictions.
     *
     * @param restriction the restriction to add

http://git-wip-us.apache.org/repos/asf/cassandra/blob/77f0f683/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java

diff --git a/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java b/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
index 6b89579..8a8ee56 100644
--- a/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
+++ b/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
@@ -24,6 +24,7 @@
 import com.google.common.base.Joiner;
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.config.ColumnDefinition;
+import org.apache.cassandra.config.ColumnDefinition.Kind;
 import org.apache.cassandra.cql3.*;
 import org.apache.cassandra.cql3.functions.Function;
 import org.apache.cassandra.cql3.statements.Bound;
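The new `hasRestrictionFor` helper added by this patch is a linear scan over the restricted columns, checking their kind. A self-contained sketch of the same idea follows; the `Kind` enum and the name-to-kind map here are simplified stand-ins for Cassandra's `ColumnDefinition` and `RestrictionSet` internals, not the real types:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RestrictionKindCheck
{
    // Simplified stand-in for ColumnDefinition.Kind.
    enum Kind { PARTITION_KEY, CLUSTERING, REGULAR, STATIC }

    // Column name -> kind, mimicking the keys of RestrictionSet's internal map.
    private final Map<String, Kind> restrictedColumns = new LinkedHashMap<>();

    void restrict(String column, Kind kind)
    {
        restrictedColumns.put(column, kind);
    }

    // Mirrors hasRestrictionFor: true if any restricted column has the given kind.
    boolean hasRestrictionFor(Kind kind)
    {
        for (Kind k : restrictedColumns.values())
            if (k == kind)
                return true;
        return false;
    }

    public static void main(String[] args)
    {
        RestrictionKindCheck r = new RestrictionKindCheck();
        r.restrict("pk", Kind.PARTITION_KEY);
        r.restrict("v", Kind.REGULAR);
        if (!r.hasRestrictionFor(Kind.REGULAR)) throw new AssertionError();
        if (r.hasRestrictionFor(Kind.STATIC)) throw new AssertionError();
        System.out.println("ok");
    }
}
```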
[jira] [Updated] (CASSANDRA-13114) 3.0.x: update netty
[ https://issues.apache.org/jira/browse/CASSANDRA-13114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Podkowinski updated CASSANDRA-13114: --- Status: Patch Available (was: Open) > 3.0.x: update netty > --- > > Key: CASSANDRA-13114 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13114 > Project: Cassandra > Issue Type: Bug >Reporter: Tom van der Woerdt >Assignee: Stefan Podkowinski > Attachments: 13114_netty-4.0.43_2.x-3.0.patch, > 13114_netty-4.0.43_3.11.patch > > > https://issues.apache.org/jira/browse/CASSANDRA-12032 updated netty for > Cassandra 3.8, but this wasn't backported. Netty 4.0.23, which ships with > Cassandra 3.0.x, has some serious bugs around memory handling for SSL > connections. > It would be nice if both were updated to 4.0.42, a version released this year. > 4.0.23 makes it impossible for me to run SSL, because nodes run out of memory > every ~30 minutes. This was fixed in 4.0.27. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-13114) 3.0.x: update netty
[ https://issues.apache.org/jira/browse/CASSANDRA-13114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Podkowinski updated CASSANDRA-13114: --- Attachment: (was: 13114_netty-4.0.43_3.11.patch) > 3.0.x: update netty > --- > > Key: CASSANDRA-13114 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13114 > Project: Cassandra > Issue Type: Bug >Reporter: Tom van der Woerdt >Assignee: Stefan Podkowinski > Attachments: 13114_netty-4.0.43_2.x-3.0.patch, > 13114_netty-4.0.43_3.11.patch > > > https://issues.apache.org/jira/browse/CASSANDRA-12032 updated netty for > Cassandra 3.8, but this wasn't backported. Netty 4.0.23, which ships with > Cassandra 3.0.x, has some serious bugs around memory handling for SSL > connections. > It would be nice if both were updated to 4.0.42, a version released this year. > 4.0.23 makes it impossible for me to run SSL, because nodes run out of memory > every ~30 minutes. This was fixed in 4.0.27. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-13114) 3.0.x: update netty
[ https://issues.apache.org/jira/browse/CASSANDRA-13114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Podkowinski updated CASSANDRA-13114: --- Attachment: 13114_netty-4.0.43_2.x-3.0.patch 13114_netty-4.0.43_3.11.patch > 3.0.x: update netty > --- > > Key: CASSANDRA-13114 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13114 > Project: Cassandra > Issue Type: Bug >Reporter: Tom van der Woerdt >Assignee: Stefan Podkowinski > Attachments: 13114_netty-4.0.43_2.x-3.0.patch, > 13114_netty-4.0.43_3.11.patch > > > https://issues.apache.org/jira/browse/CASSANDRA-12032 updated netty for > Cassandra 3.8, but this wasn't backported. Netty 4.0.23, which ships with > Cassandra 3.0.x, has some serious bugs around memory handling for SSL > connections. > It would be nice if both were updated to 4.0.42, a version released this year. > 4.0.23 makes it impossible for me to run SSL, because nodes run out of memory > every ~30 minutes. This was fixed in 4.0.27. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-13114) 3.0.x: update netty
[ https://issues.apache.org/jira/browse/CASSANDRA-13114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842958#comment-15842958 ] Stefan Podkowinski commented on CASSANDRA-13114: Some insights on dtest results: * {{local_quorum_bootstrap_test}} See CASSANDRA-12437, [PR|https://github.com/riptano/cassandra-dtest/pull/1429] created * {{test_tombstone_failure_v3}} Seems to be [failing on constant basis|http://cassci.datastax.com/view/cassandra-2.1/job/cassandra-2.1_dtest/lastCompletedBuild/testReport/read_failures_test/TestReadFailures/test_tombstone_failure_v3/], I've opened a [PR| https://github.com/riptano/cassandra-dtest/pull/1430] * {{cqlsh_tests.cqlsh_tests.CqlshSmokeTest.test_alter_table}} and * {{cql_tests.MiscellaneousCQLTester.prepared_statement_invalidation_test}} Fallout from CASSANDRA-12443 and already addressed in [PR|https://github.com/riptano/cassandra-dtest/pull/1427] Other tests seem to be flaky or have known issues. > 3.0.x: update netty > --- > > Key: CASSANDRA-13114 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13114 > Project: Cassandra > Issue Type: Bug >Reporter: Tom van der Woerdt >Assignee: Stefan Podkowinski > Attachments: 13114_netty-4.0.43_3.11.patch > > > https://issues.apache.org/jira/browse/CASSANDRA-12032 updated netty for > Cassandra 3.8, but this wasn't backported. Netty 4.0.23, which ships with > Cassandra 3.0.x, has some serious bugs around memory handling for SSL > connections. > It would be nice if both were updated to 4.0.42, a version released this year. > 4.0.23 makes it impossible for me to run SSL, because nodes run out of memory > every ~30 minutes. This was fixed in 4.0.27. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-13114) 3.0.x: update netty
[ https://issues.apache.org/jira/browse/CASSANDRA-13114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Podkowinski updated CASSANDRA-13114: --- Attachment: (was: 13114_netty-4.0.43_2.x-3.0.patch) > 3.0.x: update netty > --- > > Key: CASSANDRA-13114 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13114 > Project: Cassandra > Issue Type: Bug >Reporter: Tom van der Woerdt >Assignee: Stefan Podkowinski > Attachments: 13114_netty-4.0.43_3.11.patch > > > https://issues.apache.org/jira/browse/CASSANDRA-12032 updated netty for > Cassandra 3.8, but this wasn't backported. Netty 4.0.23, which ships with > Cassandra 3.0.x, has some serious bugs around memory handling for SSL > connections. > It would be nice if both were updated to 4.0.42, a version released this year. > 4.0.23 makes it impossible for me to run SSL, because nodes run out of memory > every ~30 minutes. This was fixed in 4.0.27. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12981) Refactor ColumnCondition
[ https://issues.apache.org/jira/browse/CASSANDRA-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer updated CASSANDRA-12981: --- Resolution: Fixed Status: Resolved (was: Ready to Commit) Committed the fix for the handling of nulls and unsets in IN conditions into 2.2 at 70e33d96e1f1236788afb50c1f02fbc64d760281 and merged it into 3.0, 3.11 and trunk. Committed the refactoring in trunk at e71a49e81f97864641f406461425a74ca4c56df1 > Refactor ColumnCondition > > > Key: CASSANDRA-12981 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12981 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Benjamin Lerer >Assignee: Benjamin Lerer > > {{ColumnCondition}} has become really difficult to understand and modify. We > should separate the logic to make improvements and maintenance easier. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9425) Make node-local schema fully immutable
[ https://issues.apache.org/jira/browse/CASSANDRA-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842944#comment-15842944 ]

Sylvain Lebresne commented on CASSANDRA-9425:
---------------------------------------------

bq. With both of these present in current trunk code, I'd prefer to commit what is here as is, and move those into a separate ticket

Wfm. Thanks for addressing the rest, +1, ship it (assuming clean CI, which we seem to have, but I checked quickly). Great work!

> Make node-local schema fully immutable
> --------------------------------------
>
> Key: CASSANDRA-9425
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9425
> Project: Cassandra
> Issue Type: Sub-task
> Reporter: Aleksey Yeschenko
> Assignee: Aleksey Yeschenko
> Fix For: 4.0
>
> The way we handle schema changes currently is inherently racy.
> All of our {{SchemaAlteringStatement}}s perform validation on a schema state
> that won't necessarily be there when the statement gets executed and mutates schema.
> We should make all the *Metadata classes ({{KeyspaceMetadata}}, {{TableMetadata}},
> {{ColumnMetadata}}) immutable, and local schema persistently snapshottable, with a
> single top-level {{AtomicReference}} to the current snapshot. Have DDL statements
> perform validation and transformation on the same state.
> In pseudo-code, think
> {code}
> public interface DDLStatement
> {
>     /**
>      * Validates that the DDL statement can be applied to the provided schema snapshot.
>      *
>      * @param schema snapshot of schema before executing CREATE KEYSPACE
>      */
>     void validate(SchemaSnapshot schema);
>
>     /**
>      * Applies the DDL statement to the provided schema snapshot.
>      * Implies that validate() has already been called on the provided snapshot.
>      *
>      * @param schema snapshot of schema before executing the statement
>      * @return snapshot of schema as it would be after executing the statement
>      */
>     SchemaSnapshot transform(SchemaSnapshot schema);
> }
> {code}

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
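The pseudo-code in the ticket describes a copy-on-write pattern: immutable schema snapshots behind a single `AtomicReference`, with validation and transformation run against the same snapshot and a compare-and-set to publish. The following is a minimal standalone sketch of that pattern under simplified assumptions (a schema modeled as a table-name map; `SchemaSnapshotSketch` and its methods are invented for illustration, not Cassandra classes):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

public class SchemaSnapshotSketch
{
    // Immutable snapshot of the schema; here just table name -> comment.
    static final class SchemaSnapshot
    {
        final Map<String, String> tables;

        SchemaSnapshot(Map<String, String> tables)
        {
            this.tables = Collections.unmodifiableMap(new HashMap<>(tables));
        }
    }

    // Single top-level reference to the current snapshot.
    static final AtomicReference<SchemaSnapshot> current =
        new AtomicReference<>(new SchemaSnapshot(new HashMap<>()));

    // validate() runs against a specific snapshot, not "whatever schema exists later".
    static void validateCreateTable(SchemaSnapshot schema, String table)
    {
        if (schema.tables.containsKey(table))
            throw new IllegalStateException("table already exists: " + table);
    }

    // transform() produces a NEW snapshot; the old one is never mutated.
    static SchemaSnapshot transformCreateTable(SchemaSnapshot schema, String table)
    {
        Map<String, String> copy = new HashMap<>(schema.tables);
        copy.put(table, "");
        return new SchemaSnapshot(copy);
    }

    static void createTable(String table)
    {
        while (true)
        {
            SchemaSnapshot before = current.get();
            validateCreateTable(before, table);      // validate and transform see
            SchemaSnapshot after = transformCreateTable(before, table); // the SAME state
            if (current.compareAndSet(before, after)) // retry if a concurrent DDL won
                return;
        }
    }

    public static void main(String[] args)
    {
        createTable("t1");
        if (!current.get().tables.containsKey("t1")) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Because `validate` and `transform` both operate on the snapshot captured at the top of the loop, the race the ticket describes (validating against one schema state and mutating another) cannot occur; a losing CAS simply re-validates against the winner's snapshot.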
[jira] [Commented] (CASSANDRA-12981) Refactor ColumnCondition
[ https://issues.apache.org/jira/browse/CASSANDRA-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842940#comment-15842940 ] Benjamin Lerer commented on CASSANDRA-12981: Thanks for the review. > Refactor ColumnCondition > > > Key: CASSANDRA-12981 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12981 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Benjamin Lerer >Assignee: Benjamin Lerer > > {{ColumnCondition}} has become really difficult to understand and modify. We > should separate the logic to make improvements and maintenance easier. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[2/7] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0
Merge branch cassandra-2.2 into cassandra-3.0

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/714edbce
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/714edbce
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/714edbce
Branch: refs/heads/trunk
Commit: 714edbce9a38c70a0f031a73d3c1a20f7f2b4bb3
Parents: e1da99a 70e33d9
Author: Benjamin Lerer
Authored: Fri Jan 27 15:26:18 2017 +0100
Committer: Benjamin Lerer
Committed: Fri Jan 27 15:29:40 2017 +0100

----------------------------------------------------------------
 CHANGES.txt                                     |   1 +
 .../apache/cassandra/cql3/ColumnCondition.java  |  62 ++-
 .../operations/InsertUpdateIfConditionTest.java | 162 ++-
 3 files changed, 215 insertions(+), 10 deletions(-)
----------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/714edbce/CHANGES.txt

diff --cc CHANGES.txt
index 3796a8d,c5e5335..547fc07
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,32 -1,5 +1,33 @@@
-2.2.9
+3.0.11
+ * Better error when modifying function permissions without explicit keyspace (CASSANDRA-12925)
+ * Indexer is not correctly invoked when building indexes over sstables (CASSANDRA-13075)
+ * Read repair is not blocking repair to finish in foreground repair (CASSANDRA-13115)
+ * Stress daemon help is incorrect (CASSANDRA-12563)
+ * Remove ALTER TYPE support (CASSANDRA-12443)
+ * Fix assertion for certain legacy range tombstone pattern (CASSANDRA-12203)
+ * Set javac encoding to utf-8 (CASSANDRA-11077)
+ * Replace empty strings with null values if they cannot be converted (CASSANDRA-12794)
+ * Fixed flaky SSTableRewriterTest: check file counts before calling validateCFS (CASSANDRA-12348)
+ * Fix deserialization of 2.x DeletedCells (CASSANDRA-12620)
+ * Add parent repair session id to anticompaction log message (CASSANDRA-12186)
+ * Improve contention handling on failure to acquire MV lock for streaming and hints (CASSANDRA-12905)
+ * Fix DELETE and UPDATE queries with empty IN restrictions (CASSANDRA-12829)
+ * Mark MVs as built after successful bootstrap (CASSANDRA-12984)
+ * Estimated TS drop-time histogram updated with Cell.NO_DELETION_TIME (CASSANDRA-13040)
+ * Nodetool compactionstats fails with NullPointerException (CASSANDRA-13021)
+ * Thread local pools never cleaned up (CASSANDRA-13033)
+ * Set RPC_READY to false when draining or if a node is marked as shutdown (CASSANDRA-12781)
+ * Make sure sstables only get committed when it's safe to discard commit log records (CASSANDRA-12956)
+ * Reject default_time_to_live option when creating or altering MVs (CASSANDRA-12868)
+ * Nodetool should use a more sane max heap size (CASSANDRA-12739)
+ * LocalToken ensures token values are cloned on heap (CASSANDRA-12651)
+ * AnticompactionRequestSerializer serializedSize is incorrect (CASSANDRA-12934)
+ * Prevent reloading of logback.xml from UDF sandbox (CASSANDRA-12535)
+ * Reenable HeapPool (CASSANDRA-12900)
+Merged from 2.2:
+ * Fix handling of nulls and unsets in IN conditions (CASSANDRA-12981)
+ * Fix race causing infinite loop if Thrift server is stopped before it starts listening (CASSANDRA-12856)
+ * CompactionTasks now correctly drops sstables out of compaction when not enough disk space is available (CASSANDRA-12979)
 * Remove support for non-JavaScript UDFs (CASSANDRA-12883)
 * Fix DynamicEndpointSnitch noop in multi-datacenter situations (CASSANDRA-13074)
 * cqlsh copy-from: encode column names to avoid primary key parsing errors (CASSANDRA-12909)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/714edbce/src/java/org/apache/cassandra/cql3/ColumnCondition.java

diff --cc src/java/org/apache/cassandra/cql3/ColumnCondition.java
index b13e534,3412e71..60e67f3
--- a/src/java/org/apache/cassandra/cql3/ColumnCondition.java
+++ b/src/java/org/apache/cassandra/cql3/ColumnCondition.java
@@@ -23,8 -23,16 +23,9 @@@ import java.util.*
 import com.google.common.collect.Iterators;
 import org.apache.cassandra.config.ColumnDefinition;
+ import org.apache.cassandra.cql3.Term.Terminal;
 import org.apache.cassandra.cql3.functions.Function;
-import org.apache.cassandra.db.Cell;
-import org.apache.cassandra.db.ColumnFamily;
-import org.apache.cassandra.db.composites.CellName;
-import org.apache.cassandra.db.composites.CellNameType;
-import org.apache.cassandra.db.composites.Composite;
-import org.apache.cassandra.db.filter.ColumnSlice;
+import org.apache.cassandra.db.rows.*;
 import org.apache.cassandra.db.marshal.*;
 import org.apache.cassandra.exceptions.InvalidRequestException;
 import org.apache.cassandra.transport.Server;
[4/7] cassandra git commit: Merge branch cassandra-3.11 into trunk
Merge branch cassandra-3.11 into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/36375f9a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/36375f9a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/36375f9a
Branch: refs/heads/trunk
Commit: 36375f9a08a13b658ead66634fd6a27957e8e02c
Parents: 80728df 078a841
Author: Benjamin Lerer
Authored: Fri Jan 27 15:42:47 2017 +0100
Committer: Benjamin Lerer
Committed: Fri Jan 27 15:42:58 2017 +0100

----------------------------------------------------------------
----------------------------------------------------------------
[6/7] cassandra git commit: Refactor ColumnCondition
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e71a49e8/src/java/org/apache/cassandra/cql3/conditions/ColumnCondition.java

diff --git a/src/java/org/apache/cassandra/cql3/conditions/ColumnCondition.java b/src/java/org/apache/cassandra/cql3/conditions/ColumnCondition.java
new file mode 100644
index 000..cfd62f5
--- /dev/null
+++ b/src/java/org/apache/cassandra/cql3/conditions/ColumnCondition.java
@@ -0,0 +1,825 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.cql3.conditions;
+
+import java.nio.ByteBuffer;
+import java.util.*;
+
+import com.google.common.collect.Iterators;
+
+import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.config.ColumnDefinition;
+import org.apache.cassandra.cql3.*;
+import org.apache.cassandra.cql3.Term.Terminal;
+import org.apache.cassandra.cql3.functions.Function;
+import org.apache.cassandra.db.rows.*;
+import org.apache.cassandra.db.marshal.*;
+import org.apache.cassandra.transport.ProtocolVersion;
+import org.apache.cassandra.utils.ByteBufferUtil;
+
+import static org.apache.cassandra.cql3.statements.RequestValidations.*;
+
+/**
+ * A CQL3 condition on the value of a column or collection element. For example, "UPDATE .. IF a = 0".
+ */
+public abstract class ColumnCondition
+{
+    public final ColumnDefinition column;
+    public final Operator operator;
+    private final Terms terms;
+
+    private ColumnCondition(ColumnDefinition column, Operator op, Terms terms)
+    {
+        this.column = column;
+        this.operator = op;
+        this.terms = terms;
+    }
+
+    /**
+     * Adds functions for the bind variables of this operation.
+     *
+     * @param functions the list of functions to get add
+     */
+    public void addFunctionsTo(List<Function> functions)
+    {
+        terms.addFunctionsTo(functions);
+    }
+
+    /**
+     * Collects the column specification for the bind variables of this operation.
+     *
+     * @param boundNames the list of column specification where to collect the
+     * bind variables of this term in.
+     */
+    public void collectMarkerSpecification(VariableSpecifications boundNames)
+    {
+        terms.collectMarkerSpecification(boundNames);
+    }
+
+    public abstract ColumnCondition.Bound bind(QueryOptions options);
+
+    protected final List<ByteBuffer> bindAndGetTerms(QueryOptions options)
+    {
+        return filterUnsetValuesIfNeeded(checkValues(terms.bindAndGet(options)));
+    }
+
+    protected final List<Terminal> bindTerms(QueryOptions options)
+    {
+        return filterUnsetValuesIfNeeded(checkValues(terms.bind(options)));
+    }
+
+    /**
+     * Checks that the output of a bind operations on {@code Terms} is a valid one.
+     * @param values the list to check
+     * @return the input list
+     */
+    private <T> List<T> checkValues(List<T> values)
+    {
+        checkFalse(values == null && operator.isIN(), "Invalid null list in IN condition");
+        checkFalse(values == Terms.UNSET_LIST, "Invalid 'unset' value in condition");
+        return values;
+    }
+
+    private <T> List<T> filterUnsetValuesIfNeeded(List<T> values)
+    {
+        if (!operator.isIN())
+            return values;
+
+        List<T> filtered = new ArrayList<>(values.size());
+        for (int i = 0, m = values.size(); i < m; i++)
+        {
+            T value = values.get(i);
+            // The value can be ByteBuffer or Constants.Value so we need to check the 2 type of UNSET
+            if (value != ByteBufferUtil.UNSET_BYTE_BUFFER && value != Constants.UNSET_VALUE)
+                filtered.add(value);
+        }
+        return filtered;
+    }
+
+    /**
+     * Simple condition (e.g. IF v = 1).
+     */
+    private static final class SimpleColumnCondition extends ColumnCondition
+    {
+        public SimpleColumnCondition(ColumnDefinition column, Operator op, Terms values)
+        {
+            super(column, op, values);
+        }
+
+        public Bound bind(QueryOptions options)
+        {
+            if (column.type.isCollection() && column.type.isMultiCell())
+                return new MultiCellCollectionBound(column, operator,
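The `filterUnsetValuesIfNeeded` method above drops the "unset" sentinel values from IN-condition lists while leaving other operators' values untouched, using identity comparison against the sentinel. A standalone sketch of the same filtering follows; the sentinel here is a plain marker object standing in for Cassandra's `UNSET_BYTE_BUFFER` / `Constants.UNSET_VALUE`:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class UnsetFilterSketch
{
    // Sentinel compared by identity, like ByteBufferUtil.UNSET_BYTE_BUFFER.
    static final Object UNSET = new Object();

    static List<Object> filterUnsetIfIn(boolean isIn, List<Object> values)
    {
        if (!isIn)
            return values; // non-IN operators keep their values untouched

        List<Object> filtered = new ArrayList<>(values.size());
        for (Object v : values)
            if (v != UNSET) // identity check, mirroring the original code
                filtered.add(v);
        return filtered;
    }

    public static void main(String[] args)
    {
        List<Object> in = filterUnsetIfIn(true, Arrays.asList("a", UNSET, "b"));
        if (in.size() != 2) throw new AssertionError();
        // For a non-IN operator the sentinel passes through; the caller's earlier
        // checkValues-style validation is what rejects it there.
        List<Object> eq = filterUnsetIfIn(false, Arrays.asList(UNSET));
        if (eq.size() != 1) throw new AssertionError();
        System.out.println("ok");
    }
}
```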
[7/7] cassandra git commit: Refactor ColumnCondition
Refactor ColumnCondition

patch by Benjamin Lerer; reviewed by Alex Petrov for CASSANDRA-12981

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e71a49e8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e71a49e8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e71a49e8
Branch: refs/heads/trunk
Commit: e71a49e81f97864641f406461425a74ca4c56df1
Parents: 36375f9
Author: Benjamin Lerer
Authored: Fri Jan 27 15:50:06 2017 +0100
Committer: Benjamin Lerer
Committed: Fri Jan 27 15:50:06 2017 +0100

----------------------------------------------------------------
 CHANGES.txt                                     |    4 +-
 src/antlr/Cql.g                                 |    1 +
 .../cassandra/cql3/AbstractConditions.java      |   64 --
 .../apache/cassandra/cql3/AbstractMarker.java   |    4 +-
 .../apache/cassandra/cql3/ColumnCondition.java  | 1042 --
 .../apache/cassandra/cql3/ColumnConditions.java |  164 ---
 .../org/apache/cassandra/cql3/Conditions.java   |  102 --
 .../cassandra/cql3/IfExistsCondition.java       |   36 -
 .../cassandra/cql3/IfNotExistsCondition.java    |   36 -
 .../org/apache/cassandra/cql3/Operator.java     |  180 ++-
 src/java/org/apache/cassandra/cql3/Terms.java   |  241 +++-
 .../cql3/conditions/AbstractConditions.java     |   64 ++
 .../cql3/conditions/ColumnCondition.java        |  825 ++
 .../cql3/conditions/ColumnConditions.java       |  165 +++
 .../cassandra/cql3/conditions/Conditions.java   |  103 ++
 .../cql3/conditions/IfExistsCondition.java      |   37 +
 .../cql3/conditions/IfNotExistsCondition.java   |   37 +
 .../cql3/statements/CQL3CasRequest.java         |    1 +
 .../cql3/statements/DeleteStatement.java        |    2 +
 .../cql3/statements/ModificationStatement.java  |    3 +
 .../cql3/statements/UpdateStatement.java        |    2 +
 .../cassandra/cql3/ColumnConditionTest.java     |  589 --
 .../cql3/conditions/ColumnConditionTest.java    |  557 ++
 .../operations/InsertUpdateIfConditionTest.java |  192 +++-
 24 files changed, 2356 insertions(+), 2095 deletions(-)
----------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e71a49e8/CHANGES.txt

diff --git a/CHANGES.txt b/CHANGES.txt
index ee2196d..d113645 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 4.0
- * Parallelize streaming of different keyspaces (4663)
+ * Refactor ColumnCondition (CASSANDRA-12981)
+ * Parallelize streaming of different keyspaces (CASSANDRA-4663)
 * Improved compactions metrics (CASSANDRA-13015)
 * Speed-up start-up sequence by avoiding un-needed flushes (CASSANDRA-13031)
 * Use Caffeine (W-TinyLFU) for on-heap caches (CASSANDRA-10855)
@@ -216,6 +217,7 @@ Merged from 3.0:
 * Correct log message for statistics of offheap memtable flush (CASSANDRA-12776)
 * Explicitly set locale for string validation (CASSANDRA-12541,CASSANDRA-12542,CASSANDRA-12543,CASSANDRA-12545)
 Merged from 2.2:
+ * Fix handling of nulls and unsets in IN conditions (CASSANDRA-12981)
 * Fix race causing infinite loop if Thrift server is stopped before it starts listening (CASSANDRA-12856)
 * CompactionTasks now correctly drops sstables out of compaction when not enough disk space is available (CASSANDRA-12979)
 * Fix DynamicEndpointSnitch noop in multi-datacenter situations (CASSANDRA-13074)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e71a49e8/src/antlr/Cql.g

diff --git a/src/antlr/Cql.g b/src/antlr/Cql.g
index a11f2fd..8b26426 100644
--- a/src/antlr/Cql.g
+++ b/src/antlr/Cql.g
@@ -46,6 +46,7 @@ import Parser,Lexer;
 import org.apache.cassandra.cql3.statements.*;
 import org.apache.cassandra.cql3.selection.*;
 import org.apache.cassandra.cql3.functions.*;
+import org.apache.cassandra.cql3.conditions.*;
 import org.apache.cassandra.db.marshal.CollectionType;
 import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.exceptions.InvalidRequestException;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e71a49e8/src/java/org/apache/cassandra/cql3/AbstractConditions.java

diff --git a/src/java/org/apache/cassandra/cql3/AbstractConditions.java b/src/java/org/apache/cassandra/cql3/AbstractConditions.java
deleted file mode 100644
index 530d2b1..000
--- a/src/java/org/apache/cassandra/cql3/AbstractConditions.java
+++ /dev/null
@@ -1,64 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF
[5/7] cassandra git commit: Refactor ColumnCondition
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e71a49e8/test/unit/org/apache/cassandra/cql3/conditions/ColumnConditionTest.java
----------------------------------------------------------
diff --git a/test/unit/org/apache/cassandra/cql3/conditions/ColumnConditionTest.java b/test/unit/org/apache/cassandra/cql3/conditions/ColumnConditionTest.java
new file mode 100644
index 000..5822027
--- /dev/null
+++ b/test/unit/org/apache/cassandra/cql3/conditions/ColumnConditionTest.java
@@ -0,0 +1,557 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.cql3.conditions;
+
+import java.nio.ByteBuffer;
+import java.util.*;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.cassandra.config.ColumnDefinition;
+import org.apache.cassandra.cql3.*;
+import org.apache.cassandra.cql3.Constants.Value;
+import org.apache.cassandra.cql3.conditions.ColumnCondition;
+import org.apache.cassandra.db.Clustering;
+import org.apache.cassandra.db.marshal.Int32Type;
+import org.apache.cassandra.db.marshal.ListType;
+import org.apache.cassandra.db.marshal.MapType;
+import org.apache.cassandra.db.marshal.SetType;
+import org.apache.cassandra.db.rows.*;
+import org.apache.cassandra.exceptions.InvalidRequestException;
+import org.apache.cassandra.serializers.TimeUUIDSerializer;
+import org.apache.cassandra.utils.ByteBufferUtil;
+import org.apache.cassandra.utils.UUIDGen;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import static org.apache.cassandra.cql3.Operator.*;
+import static org.apache.cassandra.utils.ByteBufferUtil.EMPTY_BYTE_BUFFER;
+
+
+public class ColumnConditionTest
+{
+    public static final ByteBuffer ZERO = Int32Type.instance.fromString("0");
+    public static final ByteBuffer ONE = Int32Type.instance.fromString("1");
+    public static final ByteBuffer TWO = Int32Type.instance.fromString("2");
+
+    private static Row newRow(ColumnDefinition definition, ByteBuffer value)
+    {
+        BufferCell cell = new BufferCell(definition, 0L, Cell.NO_TTL, Cell.NO_DELETION_TIME, value, null);
+        return BTreeRow.singleCellRow(Clustering.EMPTY, cell);
+    }
+
+    private static Row newRow(ColumnDefinition definition, List<ByteBuffer> values)
+    {
+        Row.Builder builder = BTreeRow.sortedBuilder();
+        builder.newRow(Clustering.EMPTY);
+        long now = System.currentTimeMillis();
+        if (values != null)
+        {
+            for (int i = 0, m = values.size(); i < m; i++)
+            {
+                UUID uuid = UUIDGen.getTimeUUID(now, i);
+                ByteBuffer key = TimeUUIDSerializer.instance.serialize(uuid);
+                ByteBuffer value = values.get(i);
+                BufferCell cell = new BufferCell(definition,
+                                                 0L,
+                                                 Cell.NO_TTL,
+                                                 Cell.NO_DELETION_TIME,
+                                                 value,
+                                                 CellPath.create(key));
+                builder.addCell(cell);
+            }
+        }
+        return builder.build();
+    }
+
+    private static Row newRow(ColumnDefinition definition, SortedSet<ByteBuffer> values)
+    {
+        Row.Builder builder = BTreeRow.sortedBuilder();
+        builder.newRow(Clustering.EMPTY);
+        if (values != null)
+        {
+            for (ByteBuffer value : values)
+            {
+                BufferCell cell = new BufferCell(definition,
+                                                 0L,
+                                                 Cell.NO_TTL,
+                                                 Cell.NO_DELETION_TIME,
+                                                 ByteBufferUtil.EMPTY_BYTE_BUFFER,
+                                                 CellPath.create(value));
+                builder.addCell(cell);
+            }
+        }
+        return builder.build();
+    }
+
+    private static Row newRow(ColumnDefinition definition, Map<ByteBuffer, ByteBuffer> values)
+    {
+        Row.Builder builder = BTreeRow.sortedBuilder();
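The list-valued `newRow` helper above keys each element cell with a time-based UUID, so that the clustering order of the cells matches the insertion order of the list elements. That property can be sketched without Cassandra's `UUIDGen` by using a hypothetical `(timestamp, sequence)` cell key with the same monotonicity guarantee; `key` below is an illustrative stand-in, not Cassandra's actual encoding:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ListCellKeySketch
{
    // Simplified stand-in for UUIDGen.getTimeUUID(now, i): an 8-byte timestamp
    // followed by a 4-byte sequence number. Keys generated within one call
    // share the timestamp, so byte-wise comparison falls through to the
    // sequence number and preserves generation order.
    static ByteBuffer key(long nowMillis, int seq)
    {
        ByteBuffer b = ByteBuffer.allocate(12);
        b.putLong(nowMillis).putInt(seq);
        b.flip();
        return b;
    }

    public static void main(String[] args)
    {
        long now = System.currentTimeMillis();
        List<ByteBuffer> keys = new ArrayList<>();
        for (int i = 0; i < 5; i++)
            keys.add(key(now, i));

        // Sorting by the cell-key comparator leaves insertion order intact,
        // which is what lets a CQL list preserve element order on disk.
        List<ByteBuffer> sorted = new ArrayList<>(keys);
        Collections.sort(sorted);
        System.out.println(sorted.equals(keys));   // true
    }
}
```

This is why appending to a CQL list never needs to read the existing elements: each append just writes a cell whose key sorts after every previously generated key.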
[3/7] cassandra git commit: Merge branch cassandra-3.0 into cassandra-3.11
Merge branch cassandra-3.0 into cassandra-3.11

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/078a8415
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/078a8415
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/078a8415

Branch: refs/heads/trunk
Commit: 078a8415441ce965d5844cb8f246e0e61da6c397
Parents: 84d8361 714edbc
Author: Benjamin Lerer
Authored: Fri Jan 27 15:36:15 2017 +0100
Committer: Benjamin Lerer
Committed: Fri Jan 27 15:36:15 2017 +0100

----------------------------------------------------------
 CHANGES.txt                                     |   1 +
 .../apache/cassandra/cql3/ColumnCondition.java  |  62 ++-
 .../operations/InsertUpdateIfConditionTest.java | 162 ++-
 3 files changed, 215 insertions(+), 10 deletions(-)
----------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/078a8415/CHANGES.txt
----------------------------------------------------------
diff --cc CHANGES.txt
index 66e17a2,547fc07..6f7b5c2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -186,18 -80,6 +186,19 @@@ Merged from 3.0
   * Correct log message for statistics of offheap memtable flush (CASSANDRA-12776)
   * Explicitly set locale for string validation (CASSANDRA-12541,CASSANDRA-12542,CASSANDRA-12543,CASSANDRA-12545)
  Merged from 2.2:
++ * Fix handling of nulls and unsets in IN conditions (CASSANDRA-12981)
+ * Fix race causing infinite loop if Thrift server is stopped before it starts listening (CASSANDRA-12856)
+ * CompactionTasks now correctly drops sstables out of compaction when not enough disk space is available (CASSANDRA-12979)
+ * Remove support for non-JavaScript UDFs (CASSANDRA-12883)
+ * Fix DynamicEndpointSnitch noop in multi-datacenter situations (CASSANDRA-13074)
+ * cqlsh copy-from: encode column names to avoid primary key parsing errors (CASSANDRA-12909)
+ * Temporarily fix bug that creates commit log when running offline tools (CASSANDRA-8616)
+ * Reduce granuality of OpOrder.Group during index build (CASSANDRA-12796)
+ * Test bind parameters and unset parameters in InsertUpdateIfConditionTest (CASSANDRA-12980)
+ * Use saved tokens when setting local tokens on StorageService.joinRing (CASSANDRA-12935)
+ * cqlsh: fix DESC TYPES errors (CASSANDRA-12914)
+ * Fix leak on skipped SSTables in sstableupgrade (CASSANDRA-12899)
+ * Avoid blocking gossip during pending range calculation (CASSANDRA-12281)
   * Fix purgeability of tombstones with max timestamp (CASSANDRA-12792)
   * Fail repair if participant dies during sync or anticompaction (CASSANDRA-12901)
   * cqlsh COPY: unprotected pk values before converting them if not using prepared statements (CASSANDRA-12863)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/078a8415/src/java/org/apache/cassandra/cql3/ColumnCondition.java
----------------------------------------------------------
diff --cc src/java/org/apache/cassandra/cql3/ColumnCondition.java
index 07f9f60,60e67f3..75a988e
--- a/src/java/org/apache/cassandra/cql3/ColumnCondition.java
+++ b/src/java/org/apache/cassandra/cql3/ColumnCondition.java
@@@ -22,8 -22,8 +22,9 @@@ import java.util.*
  import com.google.common.collect.Iterators;

 +import org.apache.cassandra.config.CFMetaData;
  import org.apache.cassandra.config.ColumnDefinition;
+ import org.apache.cassandra.cql3.Term.Terminal;
  import org.apache.cassandra.cql3.functions.Function;
  import org.apache.cassandra.db.rows.*;
  import org.apache.cassandra.db.marshal.*;
@@@ -300,10 -245,20 +301,20 @@@ public class ColumnConditio
      private SimpleInBound(ColumnCondition condition, QueryOptions options) throws InvalidRequestException
      {
          super(condition.column, condition.operator);
 -        assert !(column.type instanceof CollectionType) && condition.collectionElement == null;
 +        assert !(column.type instanceof CollectionType) && condition.field == null;
          assert condition.operator == Operator.IN;
          if (condition.inValues == null)
-             this.inValues = ((Lists.Value) condition.value.bind(options)).getElements();
+         {
+             Terminal terminal = condition.value.bind(options);
+
+             if (terminal == null)
+                 throw new InvalidRequestException("Invalid null list in IN condition");
+
+             if (terminal == Constants.UNSET_VALUE)
+                 throw new InvalidRequestException("Invalid 'unset' value in condition");
+
+             this.inValues = ((Lists.Value) terminal).getElements();
+         }
          else
          {
              this.inValues = new ArrayList<>(condition.inValues.size());
[1/7] cassandra git commit: Fix handling of nulls and unsets in IN conditions
Repository: cassandra
Updated Branches:
  refs/heads/trunk 80728df56 -> e71a49e81


Fix handling of nulls and unsets in IN conditions

patch by Benjamin Lerer; reviewed by Alex Petrov for CASSANDRA-12981

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/70e33d96
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/70e33d96
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/70e33d96

Branch: refs/heads/trunk
Commit: 70e33d96e1f1236788afb50c1f02fbc64d760281
Parents: e5a5533
Author: Benjamin Lerer
Authored: Fri Jan 27 15:17:53 2017 +0100
Committer: Benjamin Lerer
Committed: Fri Jan 27 15:17:53 2017 +0100

----------------------------------------------------------
 CHANGES.txt                                     |   1 +
 .../apache/cassandra/cql3/ColumnCondition.java  |  62 ++-
 .../operations/InsertUpdateIfConditionTest.java | 161 ++-
 3 files changed, 214 insertions(+), 10 deletions(-)
----------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/70e33d96/CHANGES.txt
----------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index 4f769a1..c5e5335 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.9
+ * Fix handling of nulls and unsets in IN conditions (CASSANDRA-12981)
  * Remove support for non-JavaScript UDFs (CASSANDRA-12883)
  * Fix DynamicEndpointSnitch noop in multi-datacenter situations (CASSANDRA-13074)
  * cqlsh copy-from: encode column names to avoid primary key parsing errors (CASSANDRA-12909)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/70e33d96/src/java/org/apache/cassandra/cql3/ColumnCondition.java
----------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/cql3/ColumnCondition.java b/src/java/org/apache/cassandra/cql3/ColumnCondition.java
index c7b5ddb..3412e71 100644
--- a/src/java/org/apache/cassandra/cql3/ColumnCondition.java
+++ b/src/java/org/apache/cassandra/cql3/ColumnCondition.java
@@ -25,6 +25,7 @@ import com.google.common.collect.Iterables;
 import com.google.common.collect.Iterators;

 import org.apache.cassandra.config.ColumnDefinition;
+import org.apache.cassandra.cql3.Term.Terminal;
 import org.apache.cassandra.cql3.functions.Function;
 import org.apache.cassandra.db.Cell;
 import org.apache.cassandra.db.ColumnFamily;
@@ -267,12 +268,26 @@ public class ColumnCondition
         assert !(column.type instanceof CollectionType) && condition.collectionElement == null;
         assert condition.operator == Operator.IN;
         if (condition.inValues == null)
-            this.inValues = ((Lists.Value) condition.value.bind(options)).getElements();
+        {
+            Terminal terminal = condition.value.bind(options);
+
+            if (terminal == null)
+                throw new InvalidRequestException("Invalid null list in IN condition");
+
+            if (terminal == Constants.UNSET_VALUE)
+                throw new InvalidRequestException("Invalid 'unset' value in condition");
+
+            this.inValues = ((Lists.Value) terminal).getElements();
+        }
         else
         {
             this.inValues = new ArrayList<>(condition.inValues.size());
             for (Term value : condition.inValues)
-                this.inValues.add(value.bindAndGet(options));
+            {
+                ByteBuffer buffer = value.bindAndGet(options);
+                if (buffer != ByteBufferUtil.UNSET_BYTE_BUFFER)
+                    this.inValues.add(value.bindAndGet(options));
+            }
         }
     }
@@ -378,12 +393,22 @@ public class ColumnCondition
         this.collectionElement = condition.collectionElement.bindAndGet(options);

         if (condition.inValues == null)
-            this.inValues = ((Lists.Value) condition.value.bind(options)).getElements();
+        {
+            Terminal terminal = condition.value.bind(options);
+            if (terminal == Constants.UNSET_VALUE)
+                throw new InvalidRequestException("Invalid 'unset' value in condition");
+            this.inValues = ((Lists.Value) terminal).getElements();
+        }
         else
         {
             this.inValues = new ArrayList<>(condition.inValues.size());
             for (Term value : condition.inValues)
-                this.inValues.add(value.bindAndGet(options));
+            {
+                ByteBuffer buffer = value.bindAndGet(options);
+                // We want to ignore unset values
+                if (buffer != ByteBufferUtil.UNSET_BYTE_BUFFER)
+                    this.inValues.add(buffer);
+            }
         }
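Abstracting away Cassandra's internals, the rule the patch enforces can be sketched in plain Java: a null IN list is rejected with an error, while individual unset bind markers are silently dropped. `UNSET` below is a hypothetical sentinel standing in for `ByteBufferUtil.UNSET_BYTE_BUFFER`, which the real code (like this sketch) compares by reference, not by content:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class InConditionSketch
{
    // Hypothetical sentinel playing the role of ByteBufferUtil.UNSET_BYTE_BUFFER.
    // An unset bind value is this exact buffer instance, so identity comparison
    // distinguishes "unset" from a legitimately empty value.
    static final ByteBuffer UNSET = ByteBuffer.allocate(0);

    // Mirrors the patched binding loop: a null list is rejected outright,
    // while unset markers are skipped rather than matched against.
    static List<ByteBuffer> bindInValues(List<ByteBuffer> boundValues)
    {
        if (boundValues == null)
            throw new IllegalArgumentException("Invalid null list in IN condition");

        List<ByteBuffer> inValues = new ArrayList<>(boundValues.size());
        for (ByteBuffer buffer : boundValues)
            if (buffer != UNSET)          // reference comparison, as in the patch
                inValues.add(buffer);
        return inValues;
    }

    public static void main(String[] args)
    {
        ByteBuffer one = ByteBuffer.wrap(new byte[]{ 1 });
        ByteBuffer two = ByteBuffer.wrap(new byte[]{ 2 });

        List<ByteBuffer> filtered = bindInValues(Arrays.asList(one, UNSET, two));
        System.out.println(filtered.size());   // unset marker dropped -> prints 2

        try
        {
            bindInValues(null);
        }
        catch (IllegalArgumentException e)
        {
            System.out.println(e.getMessage()); // prints "Invalid null list in IN condition"
        }
    }
}
```

The practical effect for a driver user: binding `unset` to one element of `IF x IN (?, ?, ?)` simply shrinks the candidate list, whereas binding `null` to the whole list (or to the list marker itself) is an invalid request.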
[1/3] cassandra git commit: Fix handling of nulls and unsets in IN conditions
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 84d836137 -> 078a84154


Fix handling of nulls and unsets in IN conditions

patch by Benjamin Lerer; reviewed by Alex Petrov for CASSANDRA-12981

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/70e33d96

Branch: refs/heads/cassandra-3.11
Commit: 70e33d96e1f1236788afb50c1f02fbc64d760281
Parents: e5a5533
Author: Benjamin Lerer
Authored: Fri Jan 27 15:17:53 2017 +0100
Committer: Benjamin Lerer
Committed: Fri Jan 27 15:17:53 2017 +0100
[2/3] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0
Merge branch cassandra-2.2 into cassandra-3.0

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/714edbce
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/714edbce
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/714edbce

Branch: refs/heads/cassandra-3.11
Commit: 714edbce9a38c70a0f031a73d3c1a20f7f2b4bb3
Parents: e1da99a 70e33d9
Author: Benjamin Lerer
Authored: Fri Jan 27 15:26:18 2017 +0100
Committer: Benjamin Lerer
Committed: Fri Jan 27 15:29:40 2017 +0100

----------------------------------------------------------
 CHANGES.txt                                     |   1 +
 .../apache/cassandra/cql3/ColumnCondition.java  |  62 ++-
 .../operations/InsertUpdateIfConditionTest.java | 162 ++-
 3 files changed, 215 insertions(+), 10 deletions(-)
----------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/714edbce/CHANGES.txt
----------------------------------------------------------
diff --cc CHANGES.txt
index 3796a8d,c5e5335..547fc07
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,32 -1,5 +1,33 @@@
-2.2.9
+3.0.11
+ * Better error when modifying function permissions without explicit keyspace (CASSANDRA-12925)
+ * Indexer is not correctly invoked when building indexes over sstables (CASSANDRA-13075)
+ * Read repair is not blocking repair to finish in foreground repair (CASSANDRA-13115)
+ * Stress daemon help is incorrect (CASSANDRA-12563)
+ * Remove ALTER TYPE support (CASSANDRA-12443)
+ * Fix assertion for certain legacy range tombstone pattern (CASSANDRA-12203)
+ * Set javac encoding to utf-8 (CASSANDRA-11077)
+ * Replace empty strings with null values if they cannot be converted (CASSANDRA-12794)
+ * Fixed flacky SSTableRewriterTest: check file counts before calling validateCFS (CASSANDRA-12348)
+ * Fix deserialization of 2.x DeletedCells (CASSANDRA-12620)
+ * Add parent repair session id to anticompaction log message (CASSANDRA-12186)
+ * Improve contention handling on failure to acquire MV lock for streaming and hints (CASSANDRA-12905)
+ * Fix DELETE and UPDATE queries with empty IN restrictions (CASSANDRA-12829)
+ * Mark MVs as built after successful bootstrap (CASSANDRA-12984)
+ * Estimated TS drop-time histogram updated with Cell.NO_DELETION_TIME (CASSANDRA-13040)
+ * Nodetool compactionstats fails with NullPointerException (CASSANDRA-13021)
+ * Thread local pools never cleaned up (CASSANDRA-13033)
+ * Set RPC_READY to false when draining or if a node is marked as shutdown (CASSANDRA-12781)
+ * Make sure sstables only get committed when it's safe to discard commit log records (CASSANDRA-12956)
+ * Reject default_time_to_live option when creating or altering MVs (CASSANDRA-12868)
+ * Nodetool should use a more sane max heap size (CASSANDRA-12739)
+ * LocalToken ensures token values are cloned on heap (CASSANDRA-12651)
+ * AnticompactionRequestSerializer serializedSize is incorrect (CASSANDRA-12934)
+ * Prevent reloading of logback.xml from UDF sandbox (CASSANDRA-12535)
+ * Reenable HeapPool (CASSANDRA-12900)
+Merged from 2.2:
+ * Fix handling of nulls and unsets in IN conditions (CASSANDRA-12981)
+ * Fix race causing infinite loop if Thrift server is stopped before it starts listening (CASSANDRA-12856)
+ * CompactionTasks now correctly drops sstables out of compaction when not enough disk space is available (CASSANDRA-12979)
  * Remove support for non-JavaScript UDFs (CASSANDRA-12883)
  * Fix DynamicEndpointSnitch noop in multi-datacenter situations (CASSANDRA-13074)
  * cqlsh copy-from: encode column names to avoid primary key parsing errors (CASSANDRA-12909)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/714edbce/src/java/org/apache/cassandra/cql3/ColumnCondition.java
----------------------------------------------------------
diff --cc src/java/org/apache/cassandra/cql3/ColumnCondition.java
index b13e534,3412e71..60e67f3
--- a/src/java/org/apache/cassandra/cql3/ColumnCondition.java
+++ b/src/java/org/apache/cassandra/cql3/ColumnCondition.java
@@@ -23,8 -23,16 +23,9 @@@ import java.util.*
  import com.google.common.collect.Iterators;

  import org.apache.cassandra.config.ColumnDefinition;
+ import org.apache.cassandra.cql3.Term.Terminal;
  import org.apache.cassandra.cql3.functions.Function;
 -import org.apache.cassandra.db.Cell;
 -import org.apache.cassandra.db.ColumnFamily;
 -import org.apache.cassandra.db.composites.CellName;
 -import org.apache.cassandra.db.composites.CellNameType;
 -import org.apache.cassandra.db.composites.Composite;
 -import org.apache.cassandra.db.filter.ColumnSlice;
 +import org.apache.cassandra.db.rows.*;
  import org.apache.cassandra.db.marshal.*;
  import org.apache.cassandra.exceptions.InvalidRequestException;
  import org.apache.cassandra.transport.Server;
[3/3] cassandra git commit: Merge branch cassandra-3.0 into cassandra-3.11
Merge branch cassandra-3.0 into cassandra-3.11

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/078a8415

Branch: refs/heads/cassandra-3.11
Commit: 078a8415441ce965d5844cb8f246e0e61da6c397
Parents: 84d8361 714edbc
Author: Benjamin Lerer
Authored: Fri Jan 27 15:36:15 2017 +0100
Committer: Benjamin Lerer
Committed: Fri Jan 27 15:36:15 2017 +0100
[1/2] cassandra git commit: Fix handling of nulls and unsets in IN conditions
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 e1da99a1d -> 714edbce9


Fix handling of nulls and unsets in IN conditions

patch by Benjamin Lerer; reviewed by Alex Petrov for CASSANDRA-12981

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/70e33d96

Branch: refs/heads/cassandra-3.0
Commit: 70e33d96e1f1236788afb50c1f02fbc64d760281
Parents: e5a5533
Author: Benjamin Lerer
Authored: Fri Jan 27 15:17:53 2017 +0100
Committer: Benjamin Lerer
Committed: Fri Jan 27 15:17:53 2017 +0100
[2/2] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0
Merge branch cassandra-2.2 into cassandra-3.0

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/714edbce

Branch: refs/heads/cassandra-3.0
Commit: 714edbce9a38c70a0f031a73d3c1a20f7f2b4bb3
Parents: e1da99a 70e33d9
Author: Benjamin Lerer
Authored: Fri Jan 27 15:26:18 2017 +0100
Committer: Benjamin Lerer
Committed: Fri Jan 27 15:29:40 2017 +0100
cassandra git commit: Fix handling of nulls and unsets in IN conditions
Repository: cassandra Updated Branches: refs/heads/cassandra-2.2 e5a553339 -> 70e33d96e Fix handling of nulls and unsets in IN conditions patch by Benjamin Lerer; reviewed by Alex Petrov for CASSANDRA-12981 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/70e33d96 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/70e33d96 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/70e33d96 Branch: refs/heads/cassandra-2.2 Commit: 70e33d96e1f1236788afb50c1f02fbc64d760281 Parents: e5a5533 Author: Benjamin LererAuthored: Fri Jan 27 15:17:53 2017 +0100 Committer: Benjamin Lerer Committed: Fri Jan 27 15:17:53 2017 +0100 -- CHANGES.txt | 1 + .../apache/cassandra/cql3/ColumnCondition.java | 62 ++- .../operations/InsertUpdateIfConditionTest.java | 161 ++- 3 files changed, 214 insertions(+), 10 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/70e33d96/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 4f769a1..c5e5335 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.2.9 + * Fix handling of nulls and unsets in IN conditions (CASSANDRA-12981) * Remove support for non-JavaScript UDFs (CASSANDRA-12883) * Fix DynamicEndpointSnitch noop in multi-datacenter situations (CASSANDRA-13074) * cqlsh copy-from: encode column names to avoid primary key parsing errors (CASSANDRA-12909) http://git-wip-us.apache.org/repos/asf/cassandra/blob/70e33d96/src/java/org/apache/cassandra/cql3/ColumnCondition.java -- diff --git a/src/java/org/apache/cassandra/cql3/ColumnCondition.java b/src/java/org/apache/cassandra/cql3/ColumnCondition.java index c7b5ddb..3412e71 100644 --- a/src/java/org/apache/cassandra/cql3/ColumnCondition.java +++ b/src/java/org/apache/cassandra/cql3/ColumnCondition.java @@ -25,6 +25,7 @@ import com.google.common.collect.Iterables; import com.google.common.collect.Iterators; import org.apache.cassandra.config.ColumnDefinition; +import 
org.apache.cassandra.cql3.Term.Terminal; import org.apache.cassandra.cql3.functions.Function; import org.apache.cassandra.db.Cell; import org.apache.cassandra.db.ColumnFamily; @@ -267,12 +268,26 @@ public class ColumnCondition assert !(column.type instanceof CollectionType) && condition.collectionElement == null; assert condition.operator == Operator.IN; if (condition.inValues == null) -this.inValues = ((Lists.Value) condition.value.bind(options)).getElements(); +{ +Terminal terminal = condition.value.bind(options); + +if (terminal == null) +throw new InvalidRequestException("Invalid null list in IN condition"); + +if (terminal == Constants.UNSET_VALUE) +throw new InvalidRequestException("Invalid 'unset' value in condition"); + +this.inValues = ((Lists.Value) terminal).getElements(); +} else { this.inValues = new ArrayList<>(condition.inValues.size()); for (Term value : condition.inValues) -this.inValues.add(value.bindAndGet(options)); +{ +ByteBuffer buffer = value.bindAndGet(options); +if (buffer != ByteBufferUtil.UNSET_BYTE_BUFFER) +this.inValues.add(value.bindAndGet(options)); +} } } @@ -378,12 +393,22 @@ public class ColumnCondition this.collectionElement = condition.collectionElement.bindAndGet(options); if (condition.inValues == null) -this.inValues = ((Lists.Value) condition.value.bind(options)).getElements(); +{ +Terminal terminal = condition.value.bind(options); +if (terminal == Constants.UNSET_VALUE) +throw new InvalidRequestException("Invalid 'unset' value in condition"); +this.inValues = ((Lists.Value) terminal).getElements(); +} else { this.inValues = new ArrayList<>(condition.inValues.size()); for (Term value : condition.inValues) -this.inValues.add(value.bindAndGet(options)); +{ +ByteBuffer buffer = value.bindAndGet(options); +// We want to ignore unset values +if (buffer != ByteBufferUtil.UNSET_BYTE_BUFFER) +this.inValues.add(buffer); +}
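The core of the CASSANDRA-12981 patch above can be sketched in isolation. The class and method names below are hypothetical stand-ins, not Cassandra's real API: `UNSET` plays the role of `ByteBufferUtil.UNSET_BYTE_BUFFER`, and, as in the patch, the comparison is by identity (`!=`) rather than `equals()`, because the unset marker is a singleton sentinel.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Standalone sketch of the CASSANDRA-12981 fix (hypothetical names).
// UNSET stands in for ByteBufferUtil.UNSET_BYTE_BUFFER.
public class InConditionSketch
{
    static final ByteBuffer UNSET = ByteBuffer.allocate(0);

    // Mirrors the loop added in the patch: bind values that are the unset
    // sentinel are silently dropped from the IN list instead of being
    // compared against as if they were real values.
    static List<ByteBuffer> bindInValues(List<ByteBuffer> boundValues)
    {
        List<ByteBuffer> inValues = new ArrayList<>(boundValues.size());
        for (ByteBuffer buffer : boundValues)
        {
            // We want to ignore unset values
            if (buffer != UNSET)
                inValues.add(buffer);
        }
        return inValues;
    }

    public static void main(String[] args)
    {
        ByteBuffer v = ByteBuffer.wrap(new byte[]{ 1 });
        List<ByteBuffer> result = bindInValues(Arrays.asList(v, UNSET, UNSET));
        System.out.println(result.size()); // the two unset markers are dropped; prints 1
    }
}
```

Note the asymmetry the patch enforces: a null bound for the whole IN list is rejected with an `InvalidRequestException`, while individual unset markers inside the list are simply ignored.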
[jira] [Comment Edited] (CASSANDRA-12977) column expire to null can still be retrieved using not null value in where clause
[ https://issues.apache.org/jira/browse/CASSANDRA-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842902#comment-15842902 ] Benjamin Lerer edited comment on CASSANDRA-12977 at 1/27/17 2:07 PM: - Sorry, it took so long. I could not reproduce the problem with 2.1.5 because it seems that we never tagged that version, but I was able to reproduce it with 2.1.4. As 2.1.5 is quite an old version, I tested your scenario on the latest 2.1 release and the problem is no longer there. Because of that I will resolve the ticket as "Cannot Reproduce". was (Author: blerer): Sorry, it tooks so long. I could not reproduce the problem with 2.1.5 because it seems that we did not tagged that version but I was able to reproduce it with 2.1.4. As 2.1.5 a quite old version, I tested your scenario on the latest 2.1 version and the problem is not here anymore. Due to that I will fix the bug as "Cannot Reproduce". > column expire to null can still be retrieved using not null value in where > clause > - > > Key: CASSANDRA-12977 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12977 > Project: Cassandra > Issue Type: Bug > Components: CQL > Environment: cql 5.0.1 > cassandra 2.1.5 >Reporter: ruilonghe1988 >Assignee: Benjamin Lerer > Attachments: attatchment.txt, attatchment.txt > > > 1. first create table: > create table device_share( > device_id text primary key, > share_status text, > share_expire boolean > ); > CREATE INDEX expireIndex ON device_share (share_expire); > create index statusIndex ON device_share (share_status); > 2. insert a new record: > insert into device_share(device_id,share_status,share_expire) values > ('d1','ready',false); > 3. update the share_expire value to false with ttl 20 > update device_share using ttl 20 set share_expire = false where device_id = > 'd1'; > 4. after 20 seconds, can retrieve the record with condition where share_expire > = false, but the record in the console shows the share_expire is null. 
> cqlsh:test> select * from device_share where device_id ='d1' and > share_status='ready' and share_expire = false allow filtering; > device_id | share_expire | share_status > ---+--+-- > d1 | null |ready > is this a bug? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
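The behaviour in the report above is easier to reason about with a toy liveness model. The names below are hypothetical, not Cassandra's actual `Cell` API; the sketch assumes only the essence of the scenario: a TTL'd cell carries an absolute expiration time, and once that time passes the cell reads back as null, even though a stale secondary-index entry created before expiry can still cause the row to match `share_expire = false` until the index entry is invalidated.

```java
// Toy model (hypothetical names, not Cassandra's real API) of TTL expiry:
// liveness is a function of query time versus the cell's absolute
// expiration time, so a dead cell reads back as null.
public class TtlCellSketch
{
    static final class Cell
    {
        final boolean value;
        final int localExpirationSec; // absolute epoch second when the TTL elapses

        Cell(boolean value, int localExpirationSec)
        {
            this.value = value;
            this.localExpirationSec = localExpirationSec;
        }

        boolean isLive(int nowSec)
        {
            return nowSec < localExpirationSec;
        }

        // At read time a dead cell is indistinguishable from an absent one,
        // which is why the console shows null after the TTL elapses.
        Boolean read(int nowSec)
        {
            return isLive(nowSec) ? value : null;
        }
    }

    public static void main(String[] args)
    {
        int writeTime = 1_000;
        Cell cell = new Cell(false, writeTime + 20);   // USING TTL 20
        System.out.println(cell.read(writeTime + 5));  // before expiry: false
        System.out.println(cell.read(writeTime + 25)); // after expiry: null
    }
}
```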
[jira] [Resolved] (CASSANDRA-12977) column expire to null can still be retrieved using not null value in where clause
[ https://issues.apache.org/jira/browse/CASSANDRA-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer resolved CASSANDRA-12977. Resolution: Cannot Reproduce > column expire to null can still be retrieved using not null value in where > clause > - > > Key: CASSANDRA-12977 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12977 > Project: Cassandra > Issue Type: Bug > Components: CQL > Environment: cql 5.0.1 > cassandra 2.1.5 >Reporter: ruilonghe1988 >Assignee: Benjamin Lerer > Attachments: attatchment.txt, attatchment.txt > > > 1. first create table: > create table device_share( > device_id text primary key, > share_status text, > share_expire boolean > ); > CREATE INDEX expireIndex ON device_share (share_expire); > create index statusIndex ON device_share (share_status); > 2. insert a new record: > insert into device_share(device_id,share_status,share_expire) values > ('d1','ready',false); > 3. update the share_expire value to false with ttl 20 > update device_share using ttl 20 set share_expire = false where device_id = > 'd1'; > 4. after 20 seconds, can retrieve the record with condition where share_expire > = false, but the record in the console shows the share_expire is null. > cqlsh:test> select * from device_share where device_id ='d1' and > share_status='ready' and share_expire = false allow filtering; > device_id | share_expire | share_status > ---+--+-- > d1 | null |ready > is this a bug?
[jira] [Commented] (CASSANDRA-12977) column expire to null can still be retrieved using not null value in where clause
[ https://issues.apache.org/jira/browse/CASSANDRA-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842902#comment-15842902 ] Benjamin Lerer commented on CASSANDRA-12977: Sorry, it took so long. I could not reproduce the problem with 2.1.5 because it seems that we never tagged that version, but I was able to reproduce it with 2.1.4. As 2.1.5 is quite an old version, I tested your scenario on the latest 2.1 release and the problem is no longer there. Because of that I will resolve the ticket as "Cannot Reproduce". > column expire to null can still be retrieved using not null value in where > clause > - > > Key: CASSANDRA-12977 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12977 > Project: Cassandra > Issue Type: Bug > Components: CQL > Environment: cql 5.0.1 > cassandra 2.1.5 >Reporter: ruilonghe1988 >Assignee: Benjamin Lerer > Attachments: attatchment.txt, attatchment.txt > > > 1. first create table: > create table device_share( > device_id text primary key, > share_status text, > share_expire boolean > ); > CREATE INDEX expireIndex ON device_share (share_expire); > create index statusIndex ON device_share (share_status); > 2. insert a new record: > insert into device_share(device_id,share_status,share_expire) values > ('d1','ready',false); > 3. update the share_expire value to false with ttl 20 > update device_share using ttl 20 set share_expire = false where device_id = > 'd1'; > 4. after 20 seconds, can retrieve the record with condition where share_expire > = false, but the record in the console shows the share_expire is null. > cqlsh:test> select * from device_share where device_id ='d1' and > share_status='ready' and share_expire = false allow filtering; > device_id | share_expire | share_status > ---+--+-- > d1 | null |ready > is this a bug?
[jira] [Updated] (CASSANDRA-12981) Refactor ColumnCondition
[ https://issues.apache.org/jira/browse/CASSANDRA-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov updated CASSANDRA-12981: Status: Ready to Commit (was: Patch Available) > Refactor ColumnCondition > > > Key: CASSANDRA-12981 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12981 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Benjamin Lerer >Assignee: Benjamin Lerer > > {{ColumnCondition}} has become really difficult to understand and modify. We > should separate the logic to make improvements and maintenance easier.
[jira] [Commented] (CASSANDRA-12981) Refactor ColumnCondition
[ https://issues.apache.org/jira/browse/CASSANDRA-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15842879#comment-15842879 ] Alex Petrov commented on CASSANDRA-12981: - Thank you! CI looks good, trunk patch looks good, too. +1 > Refactor ColumnCondition > > > Key: CASSANDRA-12981 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12981 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Benjamin Lerer >Assignee: Benjamin Lerer > > {{ColumnCondition}} has become really difficult to understand and modify. We > should separate the logic to make improvements and maintenance easier.