This is an automated email from the ASF dual-hosted git repository.

bereng pushed a commit to branch cassandra-4.1
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/cassandra-4.1 by this push:
     new a508ac06e9 Update the CQL version for the 4.1 release
a508ac06e9 is described below

commit a508ac06e9a24272d2f69f7a7546640626c16fc4
Author: Bereng <[email protected]>
AuthorDate: Thu May 5 09:53:38 2022 +0200

    Update the CQL version for the 4.1 release
    
    patch by Berenguer Blasi; reviewed by Benjamin Lerer for CASSANDRA-17570
---
 README.asc                                         |    2 +-
 doc/cql3/CQL.textile                               | 2572 --------------------
 doc/modules/cassandra/pages/cql/changes.adoc       |   10 +
 doc/modules/cassandra/pages/new/virtualtables.adoc |    4 +-
 .../org/apache/cassandra/cql3/QueryProcessor.java  |    2 +-
 5 files changed, 14 insertions(+), 2576 deletions(-)

diff --git a/README.asc b/README.asc
index 3c40bf4970..f484aa277e 100644
--- a/README.asc
+++ b/README.asc
@@ -39,7 +39,7 @@ be sitting in front of a prompt:
 
 ----
 Connected to Test Cluster at localhost:9160.
-[cqlsh 6.0.0 | Cassandra 4.0.2 | CQL spec 3.4.5 | Native protocol v5]
+[cqlsh 6.0.0 | Cassandra 4.1 | CQL spec 3.4.6 | Native protocol v5]
 Use HELP for help.
 cqlsh>
 ----
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
deleted file mode 100644
index 8bedf19a78..0000000000
--- a/doc/cql3/CQL.textile
+++ /dev/null
@@ -1,2572 +0,0 @@
-<!--
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
--->
-
-h1. Cassandra Query Language (CQL) v3.4.3
-
-
-
- <span id="tableOfContents">
-
-{toc:maxLevel=3}
-
- </span>
-
-h2. CQL Syntax
-
-h3. Preamble
-
-This document describes the Cassandra Query Language (CQL) version 3. CQL v3 is not backward compatible with CQL v2 and differs from it in numerous ways. Note that this document describes the last version of the languages. However, the "changes":#changes section provides the diff between the different versions of CQL v3.
-
-CQL v3 offers a model very close to SQL in the sense that data is put in _tables_ containing _rows_ of _columns_. For that reason, when used in this document, these terms (tables, rows and columns) have the same definition than they have in SQL. But please note that as such, they do *not* refer to the concept of rows and columns found in the internal implementation of Cassandra and in the thrift and CQL v2 API.
-
-
-h3. Conventions
-
-To aid in specifying the CQL syntax, we will use the following conventions in this document:
-
-* Language rules will be given in a "BNF":http://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form -like notation:
-
-bc(syntax). <start> ::= TERMINAL <non-terminal1> <non-terminal1>
-
-* Nonterminal symbols will have @<angle brackets>@.
-* As additional shortcut notations to BNF, we'll use traditional regular expression's symbols (@?@, @+@ and @*@) to signify that a given symbol is optional and/or can be repeated. We'll also allow parentheses to group symbols and the @[<characters>]@ notation to represent any one of @<characters>@.
-* The grammar is provided for documentation purposes and leave some minor details out. For instance, the last column definition in a @CREATE TABLE@ statement is optional but supported if present even though the provided grammar in this document suggest it is not supported.
-* Sample code will be provided in a code block:
-
-bc(sample). SELECT sample_usage FROM cql;
-
-* References to keywords or pieces of CQL code in running text will be shown in a @fixed-width font@.
-
-h3(#identifiers). Identifiers and keywords
-
-p. The CQL language uses _identifiers_ (or _names_) to identify tables, columns and other objects. An identifier is a token matching the regular expression @[a-zA-Z]@@[a-zA-Z0-9_]@@*@.
-
-p. A number of such identifiers, like @SELECT@ or @WITH@, are _keywords_. They have a fixed meaning for the language and most are reserved. The list of those keywords can be found in "Appendix A":#appendixA.
-
-p. Identifiers and (unquoted) keywords are case insensitive. Thus @SELECT@ is the same than @select@ or @sElEcT@, and @myId@ is the same than @myid@ or @MYID@ for instance. A convention often used (in particular by the samples of this documentation) is to use upper case for keywords and lower case for other identifiers.
-
-p. There is a second kind of identifiers called _quoted identifiers_ defined by enclosing an arbitrary sequence of characters in double-quotes(@"@). Quoted identifiers are never keywords. Thus @"select"@ is not a reserved keyword and can be used to refer to a column, while @select@ would raise a parse error. Also, contrarily to unquoted identifiers and keywords, quoted identifiers are case sensitive (@"My Quoted Id"@ is _different_ from @"my quoted id"@). A fully lowercase quoted identif [...]
-
-p. *Warning*: _quoted identifiers_ allows to declare columns with arbitrary names, and those can sometime clash with specific names used by the server. For instance, when using conditional update, the server will respond with a result-set containing a special result named @"[applied]"@. If you've declared a column with such a name, this could potentially confuse some tools and should be avoided. In general, unquoted identifiers should be preferred but if you use quoted identifiers, it is [...]
-
-h3(#constants). Constants
-
-CQL defines the following kind of _constants_: strings, integers, floats, booleans, uuids and blobs:
-* A string constant is an arbitrary sequence of characters characters enclosed by single-quote(@'@). One can include a single-quote in a string by repeating it, e.g. @'It''s raining today'@. Those are not to be confused with quoted identifiers that use double-quotes.
-* An integer constant is defined by @'-'?[0-9]+@.
-* A float constant is defined by @'-'?[0-9]+('.'[0-9]*)?([eE][+-]?[0-9+])?@. On top of that, @NaN@ and @Infinity@ are also float constants.
-* A boolean constant is either @true@ or @false@ up to case-insensitivity (i.e. @True@ is a valid boolean constant).
-* A "UUID":http://en.wikipedia.org/wiki/Universally_unique_identifier constant is defined by @hex{8}-hex{4}-hex{4}-hex{4}-hex{12}@ where @hex@ is an hexadecimal character, e.g. @[0-9a-fA-F]@ and @{4}@ is the number of such characters.
-* A blob constant is an hexadecimal number defined by @0[xX](hex)+@ where @hex@ is an hexadecimal character, e.g. @[0-9a-fA-F]@.
-
-For how these constants are typed, see the "data types section":#types.
-
-h3. Comments
-
-A comment in CQL is a line beginning by either double dashes (@--@) or double slash (@//@).
-
-Multi-line comments are also supported through enclosure within @/*@ and @*/@ (but nesting is not supported).
-
-bc(sample). 
--- This is a comment
-// This is a comment too
-/* This is
-   a multi-line comment */
-
-h3(#statements). Statements
-
-CQL consists of statements. As in SQL, these statements can be divided in 3 categories:
-* Data definition statements, that allow to set and change the way data is stored.
-* Data manipulation statements, that allow to change data
-* Queries, to look up data
-
-All statements end with a semicolon (@;@) but that semicolon can be omitted when dealing with a single statement. The supported statements are described in the following sections. When describing the grammar of said statements, we will reuse the non-terminal symbols defined below:
-
-bc(syntax).. 
-<identifier> ::= any quoted or unquoted identifier, excluding reserved keywords
- <tablename> ::= (<identifier> '.')? <identifier>
-
-    <string> ::= a string constant
-   <integer> ::= an integer constant
-     <float> ::= a float constant
-    <number> ::= <integer> | <float>
-      <uuid> ::= a uuid constant
-   <boolean> ::= a boolean constant
-       <hex> ::= a blob constant
-
-  <constant> ::= <string>
-               | <number>
-               | <uuid>
-               | <boolean>
-               | <hex>
-  <variable> ::= '?'
-               | ':' <identifier>
-      <term> ::= <constant>
-               | <collection-literal>
-               | <variable>
-               | <function> '(' (<term> (',' <term>)*)? ')'
-               | CAST '(' <term> AS <type> ')'
-
-  <collection-literal> ::= <map-literal>
-                         | <set-literal>
-                         | <list-literal>
-         <map-literal> ::= '{' ( <term> ':' <term> ( ',' <term> ':' <term> )* )? '}'
-         <set-literal> ::= '{' ( <term> ( ',' <term> )* )? '}'
-        <list-literal> ::= '[' ( <term> ( ',' <term> )* )? ']'
-
-    <function> ::= <ident>
-
-  <properties> ::= <property> (AND <property>)*
-    <property> ::= <identifier> '=' ( <identifier> | <constant> | <map-literal> )
-p. 
-Please note that not every possible productions of the grammar above will be valid in practice. Most notably, @<variable>@ and nested @<collection-literal>@ are currently not allowed inside @<collection-literal>@.
-
-p. A @<variable>@ can be either anonymous (a question mark (@?@)) or named (an identifier preceded by @:@). Both declare a bind variables for "prepared statements":#preparedStatement. The only difference between an anymous and a named variable is that a named one will be easier to refer to (how exactly depends on the client driver used).
-
-p. The @<properties>@ production is use by statement that create and alter keyspaces and tables. Each @<property>@ is either a _simple_ one, in which case it just has a value, or a _map_ one, in which case it's value is a map grouping sub-options. The following will refer to one or the other as the _kind_ (_simple_ or _map_) of the property.
-
-p. A @<tablename>@ will be used to identify a table. This is an identifier representing the table name that can be preceded by a keyspace name. The keyspace name, if provided, allow to identify a table in another keyspace than the currently active one (the currently active keyspace is set through the <a href=":#useStmt"><tt>USE</tt></a> statement).
-
-p. For supported @<function>@, see the section on "functions":#functions.
-
-p. Strings can be either enclosed with single quotes or two dollar characters. The second syntax has been introduced to allow strings that contain single quotes. Typical candidates for such strings are source code fragments for user-defined functions.
-
-__Sample:__
-
-bc(sample).. 
-  'some string value'
-
-  $$double-dollar string can contain single ' quotes$$
-p. 
-
-h3(#preparedStatement). Prepared Statement
-
-CQL supports _prepared statements_. Prepared statement is an optimization that allows to parse a query only once but execute it multiple times with different concrete values.
-
-In a statement, each time a column value is expected (in the data manipulation and query statements), a @<variable>@ (see above) can be used instead. A statement with bind variables must then be _prepared_. Once it has been prepared, it can executed by providing concrete values for the bind variables. The exact procedure to prepare a statement and execute a prepared statement depends on the CQL driver used and is beyond the scope of this document.
-
-In addition to providing column values, bind markers may be used to provide values for @LIMIT@, @TIMESTAMP@, and @TTL@ clauses.  If anonymous bind markers are used, the names for the query parameters will be @[limit]@, @[timestamp]@, and @[ttl]@, respectively.
-
-
-h2(#dataDefinition). Data Definition
-
-h3(#createKeyspaceStmt). CREATE KEYSPACE
-
-__Syntax:__
-
-bc(syntax).. 
-<create-keyspace-stmt> ::= CREATE KEYSPACE (IF NOT EXISTS)? <identifier> WITH <properties>
-p. 
-__Sample:__
-
-bc(sample).. 
-CREATE KEYSPACE Excelsior
-           WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 3};
-
-CREATE KEYSPACE Excalibur
-           WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1' : 1, 'DC2' : 3}
-            AND durable_writes = false;
-p. 
-The @CREATE KEYSPACE@ statement creates a new top-level _keyspace_. A keyspace is a namespace that defines a replication strategy and some options for a set of tables. Valid keyspaces names are identifiers composed exclusively of alphanumerical characters and whose length is lesser or equal to 32. Note that as identifiers, keyspace names are case insensitive: use a quoted identifier for case sensitive keyspace names.
-
-The supported @<properties>@ for @CREATE KEYSPACE@ are:
-
-|_. name          |_. kind   |_. mandatory |_. default |_. description|
-|@replication@    | _map_    | yes         |           | The replication strategy and options to use for the keyspace. |
-|@durable_writes@ | _simple_ | no          | true      | Whether to use the commit log for updates on this keyspace (disable this option at your own risk!). |
-
-The @replication@ @<property>@ is mandatory. It must at least contains the @'class'@ sub-option which defines the replication strategy class to use. The rest of the sub-options depends on that replication strategy class. By default, Cassandra support the following @'class'@:
-* @'SimpleStrategy'@: A simple strategy that defines a simple replication factor for the whole cluster. The only sub-options supported is @'replication_factor'@ to define that replication factor and is mandatory.
-* @'NetworkTopologyStrategy'@: A replication strategy that allows to set the replication factor independently for each data-center. The rest of the sub-options are key-value pairs where each time the key is the name of a datacenter and the value the replication factor for that data-center.
-
-Attempting to create an already existing keyspace will return an error unless the @IF NOT EXISTS@ option is used. If it is used, the statement will be a no-op if the keyspace already exists.
-
-h3(#useStmt). USE
-
-__Syntax:__
-
-bc(syntax). <use-stmt> ::= USE <identifier>
-
-__Sample:__
-
-bc(sample). USE myApp;
-
-The @USE@ statement takes an existing keyspace name as argument and set it as the per-connection current working keyspace. All subsequent keyspace-specific actions will be performed in the context of the selected keyspace, unless "otherwise specified":#statements, until another USE statement is issued or the connection terminates.
-
-h3(#alterKeyspaceStmt). ALTER KEYSPACE
-
-__Syntax:__
-
-bc(syntax).. 
-<create-keyspace-stmt> ::= ALTER KEYSPACE <identifier> WITH <properties>
-p. 
-__Sample:__
-
-bc(sample).. 
-ALTER KEYSPACE Excelsior
-          WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 4};
-
-p. 
-The @ALTER KEYSPACE@ statement alters the properties of an existing keyspace. The supported @<properties>@ are the same as for the "@CREATE KEYSPACE@":#createKeyspaceStmt statement.
-
-
-h3(#dropKeyspaceStmt). DROP KEYSPACE
-
-__Syntax:__
-
-bc(syntax).  <drop-keyspace-stmt> ::= DROP KEYSPACE ( IF EXISTS )? <identifier>
-
-__Sample:__
-
-bc(sample). DROP KEYSPACE myApp;
-
-A @DROP KEYSPACE@ statement results in the immediate, irreversible removal of an existing keyspace, including all column families in it, and all data contained in those column families.
-
-If the keyspace does not exists, the statement will return an error, unless @IF EXISTS@ is used in which case the operation is a no-op.
-
-
-h3(#createTableStmt). CREATE TABLE
-
-__Syntax:__
-
-bc(syntax).. 
-<create-table-stmt> ::= CREATE ( TABLE | COLUMNFAMILY ) ( IF NOT EXISTS )? <tablename>
-                          '(' <column-definition> ( ',' <column-definition> )* ')'
-                          ( WITH <option> ( AND <option>)* )?
-
-<column-definition> ::= <identifier> <type> ( STATIC )? ( PRIMARY KEY )?
-                      | PRIMARY KEY '(' <partition-key> ( ',' <identifier> )* ')'
-
-<partition-key> ::= <identifier>
-                  | '(' <identifier> (',' <identifier> )* ')'
-
-<option> ::= <property>
-           | COMPACT STORAGE
-           | CLUSTERING ORDER
-p. 
-__Sample:__
-
-bc(sample).. 
-CREATE TABLE monkeySpecies (
-    species text PRIMARY KEY,
-    common_name text,
-    population varint,
-    average_size int
-) WITH comment='Important biological records';
-
-CREATE TABLE timeline (
-    userid uuid,
-    posted_month int,
-    posted_time uuid,
-    body text,
-    posted_by text,
-    PRIMARY KEY (userid, posted_month, posted_time)
-) WITH compaction = { 'class' : 'LeveledCompactionStrategy' };
-p. 
-The @CREATE TABLE@ statement creates a new table. Each such table is a set of _rows_ (usually representing related entities) for which it defines a number of properties. A table is defined by a "name":#createTableName, it defines the <a href="#createTableColumn"><it>columns</it></a> composing rows of the table and have a number of "options":#createTableOptions. Note that the @CREATE COLUMNFAMILY@ syntax is supported as an alias for @CREATE TABLE@ (for historical reasons).
-
-Attempting to create an already existing table will return an error unless the @IF NOT EXISTS@ option is used. If it is used, the statement will be a no-op if the table already exists.
-
-h4(#createTableName). @<tablename>@
-
-Valid table names are the same as valid "keyspace names":#createKeyspaceStmt (up to 32 characters long alphanumerical identifiers). If the table name is provided alone, the table is created within the current keyspace (see <a href="#useStmt"><tt>USE</tt></a>), but if it is prefixed by an existing keyspace name (see "@<tablename>@":#statements grammar), it is created in the specified keyspace (but does *not* change the current keyspace).
-
-
-h4(#createTableColumn). @<column-definition>@
-
-A @CREATE TABLE@ statement defines the columns that rows of the table can have. A _column_ is defined by its name (an identifier) and its type (see the "data types":#types section for more details on allowed types and their properties).
-
-Within a table, a row is uniquely identified by its @PRIMARY KEY@ (or more simply the key), and hence all table definitions *must* define a PRIMARY KEY (and only one). A @PRIMARY KEY@ is composed of one or more of the columns defined in the table. If the @PRIMARY KEY@ is only one column, this can be specified directly after the column definition. Otherwise, it must be specified by following @PRIMARY KEY@ by the comma-separated list of column names composing the key within parenthesis. No [...]
-
-bc(sample). 
-CREATE TABLE t (
-    k int PRIMARY KEY,
-    other text
-)
-
-is equivalent to
-
-bc(sample). 
-CREATE TABLE t (
-    k int,
-    other text,
-    PRIMARY KEY (k)
-)
-
-h4(#createTablepartitionClustering). Partition key and clustering columns
-
-In CQL, the order in which columns are defined for the @PRIMARY KEY@ matters. The first column of the key is called the __partition key__. It has the property that all the rows sharing the same partition key (even across table in fact) are stored on the same physical node. Also, insertion/update/deletion on rows sharing the same partition key for a given table are performed __atomically__ and in __isolation__. Note that it is possible to have a composite partition key, i.e. a partition k [...]
-
-The remaining columns of the @PRIMARY KEY@ definition, if any, are called __clustering columns. On a given physical node, rows for a given partition key are stored in the order induced by the clustering columns, making the retrieval of rows in that clustering order particularly efficient (see <a href="#selectStmt"><tt>SELECT</tt></a>).
-
-h4(#createTableStatic). @STATIC@ columns
-
-Some columns can be declared as @STATIC@ in a table definition. A column that is static will be "shared" by all the rows belonging to the same partition (having the same partition key). For instance, in:
-
-bc(sample). 
-CREATE TABLE test (
-    pk int,
-    t int,
-    v text,
-    s text static,
-    PRIMARY KEY (pk, t)
-);
-INSERT INTO test(pk, t, v, s) VALUES (0, 0, 'val0', 'static0');
-INSERT INTO test(pk, t, v, s) VALUES (0, 1, 'val1', 'static1');
-SELECT * FROM test WHERE pk=0 AND t=0;
-
-the last query will return @'static1'@ as value for @s@, since @s@ is static and thus the 2nd insertion modified this "shared" value. Note however that static columns are only static within a given partition, and if in the example above both rows where from different partitions (i.e. if they had different value for @pk@), then the 2nd insertion would not have modified the value of @s@ for the first row.
-
-A few restrictions applies to when static columns are allowed:
-* tables with the @COMPACT STORAGE@ option (see below) cannot have them
-* a table without clustering columns cannot have static columns (in a table without clustering columns, every partition has only one row, and so every column is inherently static).
-* only non @PRIMARY KEY@ columns can be static
-
-
-h4(#createTableOptions). @<option>@
-
-The @CREATE TABLE@ statement supports a number of options that controls the configuration of a new table. These options can be specified after the @WITH@ keyword.
-
-The first of these option is @COMPACT STORAGE@. This option is mainly targeted towards backward compatibility for definitions created before CQL3 (see "www.datastax.com/dev/blog/thrift-to-cql3":http://www.datastax.com/dev/blog/thrift-to-cql3 for more details).  The option also provides a slightly more compact layout of data on disk but at the price of diminished flexibility and extensibility for the table.  Most notably, @COMPACT STORAGE@ tables cannot have collections nor static columns [...]
-
-Another option is @CLUSTERING ORDER@. It allows to define the ordering of rows on disk. It takes the list of the clustering column names with, for each of them, the on-disk order (Ascending or descending). Note that this option affects "what @ORDER BY@ are allowed during @SELECT@":#selectOrderBy.
-
-Table creation supports the following other @<property>@:
-
-|_. option                    |_. kind   |_. default   |_. description|
-|@comment@                    | _simple_ | none        | A free-form, human-readable comment.|
-|@gc_grace_seconds@           | _simple_ | 864000      | Time to wait before garbage collecting tombstones (deletion markers).|
-|@bloom_filter_fp_chance@     | _simple_ | 0.00075     | The target probability of false positive of the sstable bloom filters. Said bloom filters will be sized to provide the provided probability (thus lowering this value impact the size of bloom filters in-memory and on-disk)|
-|@default_time_to_live@       | _simple_ | 0           | The default expiration time ("TTL") in seconds for a table.|
-|@compaction@                 | _map_    | _see below_ | Compaction options, see "below":#compactionOptions.|
-|@compression@                | _map_    | _see below_ | Compression options, see "below":#compressionOptions.|
-|@caching@                    | _map_    | _see below_ | Caching options, see "below":#cachingOptions.|
-
-h4(#compactionOptions). Compaction options
-
-The @compaction@ property must at least define the @'class'@ sub-option, that defines the compaction strategy class to use. The default supported class are @'SizeTieredCompactionStrategy'@, @'LeveledCompactionStrategy'@, @'DateTieredCompactionStrategy'@ and @'TimeWindowCompactionStrategy'@. Custom strategy can be provided by specifying the full class name as a "string constant":#constants. The rest of the sub-options depends on the chosen class. The sub-options supported by the default c [...]
-
-|_. option                               |_. supported compaction strategy |_. default    |_. description |
-| @enabled@                              | _all_                           | true         | A boolean denoting whether compaction should be enabled or not.|
-| @tombstone_threshold@                  | _all_                           | 0.2          | A ratio such that if a sstable has more than this ratio of gcable tombstones over all contained columns, the sstable will be compacted (with no other sstables) for the purpose of purging those tombstones. |
-| @tombstone_compaction_interval@        | _all_                           | 1 day        | The minimum time to wait after an sstable creation time before considering it for "tombstone compaction", where "tombstone compaction" is the compaction triggered if the sstable has more gcable tombstones than @tombstone_threshold@. |
-| @unchecked_tombstone_compaction@       | _all_                           | false        | Setting this to true enables more aggressive tombstone compactions - single sstable tombstone compactions will run without checking how likely it is that they will be successful. |
-| @min_sstable_size@                     | SizeTieredCompactionStrategy    | 50MB         | The size tiered strategy groups SSTables to compact in buckets. A bucket groups SSTables that differs from less than 50% in size.  However, for small sizes, this would result in a bucketing that is too fine grained. @min_sstable_size@ defines a size threshold (in bytes) below which all SSTables belong to one unique bucket|
-| @min_threshold@                        | SizeTieredCompactionStrategy    | 4            | Minimum number of SSTables needed to start a minor compaction.|
-| @max_threshold@                        | SizeTieredCompactionStrategy    | 32           | Maximum number of SSTables processed by one minor compaction.|
-| @bucket_low@                           | SizeTieredCompactionStrategy    | 0.5          | Size tiered consider sstables to be within the same bucket if their size is within [average_size * @bucket_low@, average_size * @bucket_high@ ] (i.e the default groups sstable whose sizes diverges by at most 50%)|
-| @bucket_high@                          | SizeTieredCompactionStrategy    | 1.5          | Size tiered consider sstables to be within the same bucket if their size is within [average_size * @bucket_low@, average_size * @bucket_high@ ] (i.e the default groups sstable whose sizes diverges by at most 50%).|
-| @sstable_size_in_mb@                   | LeveledCompactionStrategy       | 5MB          | The target size (in MB) for sstables in the leveled strategy. Note that while sstable sizes should stay less or equal to @sstable_size_in_mb@, it is possible to exceptionally have a larger sstable as during compaction, data for a given partition key are never split into 2 sstables|
-| @timestamp_resolution@                 | DateTieredCompactionStrategy    | MICROSECONDS | The timestamp resolution used when inserting data, could be MILLISECONDS, MICROSECONDS etc (should be understandable by Java TimeUnit) - don't change this unless you do mutations with USING TIMESTAMP <non_microsecond_timestamps> (or equivalent directly in the client)|
-| @base_time_seconds@                    | DateTieredCompactionStrategy    | 60           | The base size of the time windows. |
-| @max_sstable_age_days@                 | DateTieredCompactionStrategy    | 365          | SSTables only containing data that is older than this will never be compacted. |
-| @timestamp_resolution@                 | TimeWindowCompactionStrategy    | MICROSECONDS | The timestamp resolution used when inserting data, could be MILLISECONDS, MICROSECONDS etc (should be understandable by Java TimeUnit) - don't change this unless you do mutations with USING TIMESTAMP <non_microsecond_timestamps> (or equivalent directly in the client)|
-| @compaction_window_unit@               | TimeWindowCompactionStrategy    | DAYS         | The Java TimeUnit used for the window size, set in conjunction with @compaction_window_size@. Must be one of DAYS, HOURS, MINUTES |
-| @compaction_window_size@               | TimeWindowCompactionStrategy    | 1            | The number of @compaction_window_unit@ units that make up a time window. |
-| @unsafe_aggressive_sstable_expiration@ | TimeWindowCompactionStrategy    | false        | Expired sstables will be dropped without checking its data is shadowing other sstables. This is a potentially risky option that can lead to data loss or deleted data re-appearing, going beyond what `unchecked_tombstone_compaction` does for single  sstable compaction. Due to the risk the jvm must also be started with `-Dcassandra.unsafe_aggressive_sstable_expiration=true`. |
-
-h4(#compressionOptions). Compression options
-
-For the @compression@ property, the following sub-options are available:
-
-|_. option                 |_. default        |_. description |
-| @class@                  | LZ4Compressor    | The compression algorithm to use. Default compressor are: LZ4Compressor, SnappyCompressor and DeflateCompressor. Use @'enabled' : false@ to disable compression. Custom compressor can be provided by specifying the full class name as a "string constant":#constants.|
-| @enabled@                | true             | By default compression is enabled. To disable it, set @enabled@ to @false@
-| @chunk_length_in_kb@     | 64KB             | On disk SSTables are compressed by block (to allow random reads). This defines the size (in KB) of said block. Bigger values may improve the compression rate, but increases the minimum size of data to be read from disk for a read |
-| @crc_check_chance@       | 1.0              | When compression is enabled, each compressed block includes a checksum of that block for the purpose of detecting disk bitrot and avoiding the propagation of corruption to other replica. This option defines the probability with which those checksums are checked during read. By default they are always checked. Set to 0 to disable checksum checking and to 0.5 for instance to check them every other read|
-
-h4(#cachingOptions). Caching options
-
-For the @caching@ property, the following sub-options are available:
-
-|_. option              |_. default        |_. description |
-|@keys@                 | ALL   | Whether to cache keys ("key cache") for this table. Valid values are: @ALL@ and @NONE@.|
-|@rows_per_partition@   | NONE   | The amount of rows to cache per partition ("row cache"). If an integer @n@ is specified, the first @n@ queried rows of a partition will be cached. Other possible options are @ALL@, to cache all rows of a queried partition, or @NONE@ to disable row caching.|
-
-h4. Other considerations:
-
-* When "inserting":#insertStmt / "updating":#updateStmt a given row, not all columns needs to be defined (except for those part of the key), and missing columns occupy no space on disk. Furthermore, adding new columns (see <a href=#alterStmt><tt>ALTER TABLE</tt></a>) is a constant time operation. There is thus no need to try to anticipate future usage (or to cry when you haven't) when creating a table.
-
-
-h3(#alterTableStmt). ALTER TABLE
-
-__Syntax:__
-
-bc(syntax).. 
-<alter-table-stmt> ::= ALTER (TABLE | COLUMNFAMILY) <tablename> <instruction>
-
-<instruction> ::= ADD   <identifier> <type>
-                | ADD   ( <identifier> <type> ( , <identifier> <type> )* )
-                | DROP  <identifier>
-                | DROP  ( <identifier> ( , <identifier> )* )
-                | DROP COMPACT STORAGE
-                | WITH  <option> ( AND <option> )*
-p. 
-__Sample:__
-
-bc(sample).. 
-ALTER TABLE addamsFamily
-
-ALTER TABLE addamsFamily
-ADD gravesite varchar;
-
-ALTER TABLE addamsFamily
-WITH comment = 'A most excellent and useful column family';
-p. 
-The @ALTER@ statement is used to manipulate table definitions. It allows for 
adding new columns, dropping existing ones, or updating the table options. As 
with table creation, @ALTER COLUMNFAMILY@ is allowed as an alias for @ALTER 
TABLE@.
-
-The @<tablename>@ is the table name optionally preceded by the keyspace name.  
The @<instruction>@ defines the alteration to perform:
-* @ADD@: Adds a new column to the table. The @<identifier>@ for the new column 
must not conflict with an existing column. Moreover, columns cannot be added to 
tables defined with the @COMPACT STORAGE@ option.
-* @DROP@: Removes a column from the table. Dropped columns immediately become unavailable in queries and will not be included in compacted sstables in the future. If a column is re-added, queries will not return values written before the column was last dropped. It is assumed that timestamps represent actual time, so if this is not the case in your deployment, you should NOT re-add previously dropped columns. Columns can't be dropped from tables defined with the @COMPACT STORAGE@ option.
-* @DROP COMPACT STORAGE@: Removes Thrift compatibility mode from the table.
-* @WITH@: Allows updating the options of the table. The "supported @<option>@":#createTableOptions (and syntax) are the same as for the @CREATE TABLE@ statement, except that @COMPACT STORAGE@ is not supported. Note that setting any @compaction@ sub-option has the effect of erasing all previous @compaction@ options, so you need to re-specify all the sub-options you want to keep. The same note applies to the set of @compression@ sub-options.
-
-h4. CQL type compatibility:
-
-CQL data types may only be converted as shown in the following table.
-
-|_. Data type may be altered to:|_.Data type|
-|timestamp|bigint|
-|ascii, bigint, boolean, date, decimal, double, float, inet, int, smallint, 
text, time, timestamp, timeuuid, tinyint, uuid, varchar, varint|blob|
-|int|date|
-|ascii, varchar|text|
-|bigint|time|
-|bigint|timestamp|
-|timeuuid|uuid|
-|ascii, text|varchar|
-|bigint, int, timestamp|varint|
-
-Clustering columns have stricter requirements; only the conversions below are allowed.
-
-|_. Data type may be altered to:|_.Data type|
-|ascii, text, varchar|blob|
-|ascii, varchar|text|
-|ascii, text|varchar|
-
-h3(#dropTableStmt). DROP TABLE
-
-__Syntax:__
-
-bc(syntax). <drop-table-stmt> ::= DROP TABLE ( IF EXISTS )? <tablename>
-
-__Sample:__
-
-bc(sample). DROP TABLE worldSeriesAttendees;
-
-The @DROP TABLE@ statement results in the immediate, irreversible removal of a 
table, including all data contained in it. As for table creation, @DROP 
COLUMNFAMILY@ is allowed as an alias for @DROP TABLE@.
-
-If the table does not exist, the statement will return an error, unless @IF EXISTS@ is used, in which case the operation is a no-op.
-
-h3(#truncateStmt). TRUNCATE
-
-__Syntax:__
-
-bc(syntax). <truncate-stmt> ::= TRUNCATE ( TABLE | COLUMNFAMILY )? <tablename>
-
-__Sample:__
-
-bc(sample). TRUNCATE superImportantData;
-
-The @TRUNCATE@ statement permanently removes all data from a table.
-
-
-h3(#createIndexStmt). CREATE INDEX
-
-__Syntax:__
-
-bc(syntax).. 
-<create-index-stmt> ::= CREATE ( CUSTOM )? INDEX ( IF NOT EXISTS )? ( 
<indexname> )?
-                            ON <tablename> '(' <index-identifier> ')'
-                            ( USING <string> ( WITH OPTIONS = <map-literal> )? 
)?
-
-<index-identifier> ::= <identifier>
-                     | keys( <identifier> )
-p. 
-__Sample:__
-
-bc(sample). 
-CREATE INDEX userIndex ON NerdMovies (user);
-CREATE INDEX ON Mutants (abilityId);
-CREATE INDEX ON users (keys(favs));
-CREATE CUSTOM INDEX ON users (email) USING 'path.to.the.IndexClass';
-CREATE CUSTOM INDEX ON users (email) USING 'path.to.the.IndexClass' WITH 
OPTIONS = {'storage': '/mnt/ssd/indexes/'};
-
-The @CREATE INDEX@ statement is used to create a new (automatic) secondary 
index for a given (existing) column in a given table. A name for the index 
itself can be specified before the @ON@ keyword, if desired. If data already 
exists for the column, it will be indexed asynchronously. After the index is 
created, new data for the column is indexed automatically at insertion time.
-
-Attempting to create an already existing index will return an error unless the 
@IF NOT EXISTS@ option is used. If it is used, the statement will be a no-op if 
the index already exists.
-
-h4(#keysIndex). Indexes on Map Keys
-
-When creating an index on a "map column":#map, you may index either the keys 
or the values.  If the column identifier is placed within the @keys()@ 
function, the index will be on the map keys, allowing you to use @CONTAINS KEY@ 
in @WHERE@ clauses.  Otherwise, the index will be on the map values.
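-
-For example, assuming a @users@ table with a @favs@ column of type @map<text, text>@ (a sketch; the table and column names are illustrative), a keys index supports queries such as:
-
-bc(sample). 
-CREATE INDEX ON users (keys(favs));
-SELECT * FROM users WHERE favs CONTAINS KEY 'color';
-
-An index created without @keys()@ would instead support @WHERE favs CONTAINS 'blue'@.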
-
-h3(#dropIndexStmt). DROP INDEX
-
-__Syntax:__
-
-bc(syntax).  <drop-index-stmt> ::= DROP INDEX ( IF EXISTS )? ( <keyspace> '.' 
)? <identifier>
-
-__Sample:__
-
-bc(sample).. 
-DROP INDEX userIndex;
-
-DROP INDEX userkeyspace.address_index;
-p. 
-The @DROP INDEX@ statement is used to drop an existing secondary index. The 
argument of the statement is the index name, which may optionally specify the 
keyspace of the index.
-
-If the index does not exist, the statement will return an error, unless @IF EXISTS@ is used, in which case the operation is a no-op.
-
-
-h3(#createMVStmt). CREATE MATERIALIZED VIEW
-
-__Syntax:__
-
-bc(syntax).. 
-<create-table-stmt> ::= CREATE MATERIALIZED VIEW ( IF NOT EXISTS )? <viewname> 
AS
-                          SELECT ( '(' <identifier> ( ',' <identifier> ) * ')' 
| '*' )
-                          FROM <tablename>
-                          ( WHERE <where-clause> )?
-                          PRIMARY KEY '(' <partition-key> ( ',' <identifier> 
)* ')'
-                          ( WITH <option> ( AND <option>)* )?
-p. 
-__Sample:__
-
-bc(sample).. 
-CREATE MATERIALIZED VIEW monkeySpecies_by_population AS
-    SELECT *
-    FROM monkeySpecies
-    WHERE population IS NOT NULL AND species IS NOT NULL
-    PRIMARY KEY (population, species)
-    WITH comment='Allow query by population instead of species';
-p. 
-The @CREATE MATERIALIZED VIEW@ statement creates a new materialized view. Each 
such view is a set of _rows_ which corresponds to rows which are present in the 
underlying, or base, table specified in the @SELECT@ statement. A materialized 
view cannot be directly updated, but updates to the base table will cause 
corresponding updates in the view.
-
-Attempting to create an already existing materialized view will return an 
error unless the @IF NOT EXISTS@ option is used. If it is used, the statement 
will be a no-op if the materialized view already exists.
-
-h4(#createMVWhere). @WHERE@ Clause
-
-The @<where-clause>@ is similar to the "where clause of a @SELECT@ 
statement":#selectWhere, with a few differences.  First, the where clause must 
contain an expression that disallows @NULL@ values in columns in the view's 
primary key.  If no other restriction is desired, this can be accomplished with 
an @IS NOT NULL@ expression.  Second, only columns which are in the base 
table's primary key may be restricted with expressions other than @IS NOT 
NULL@.  (Note that this second restriction  [...]
-
-h3(#alterMVStmt). ALTER MATERIALIZED VIEW
-
-__Syntax:__
-
-bc(syntax). <alter-materialized-view-stmt> ::= ALTER MATERIALIZED VIEW 
<viewname>
-                                                 WITH <option> ( AND <option> 
)*
-
-p.
-The @ALTER MATERIALIZED VIEW@ statement allows options to be updated; these options are the same as <a href="#createTableOptions">@CREATE TABLE@'s options</a>.
-
-
-h3(#dropMVStmt). DROP MATERIALIZED VIEW
-
-__Syntax:__
-
-bc(syntax). <drop-materialized-stmt> ::= DROP MATERIALIZED VIEW ( IF EXISTS )? 
<tablename>
-
-__Sample:__
-
-bc(sample). DROP MATERIALIZED VIEW monkeySpecies_by_population;
-
-The @DROP MATERIALIZED VIEW@ statement is used to drop an existing 
materialized view.
-
-If the materialized view does not exist, the statement will return an error, unless @IF EXISTS@ is used, in which case the operation is a no-op.
-
-h3(#createTypeStmt). CREATE TYPE
-
-__Syntax:__
-
-bc(syntax).. 
-<create-type-stmt> ::= CREATE TYPE ( IF NOT EXISTS )? <typename>
-                         '(' <field-definition> ( ',' <field-definition> )* ')'
-
-<typename> ::= ( <keyspace-name> '.' )? <identifier>
-
-<field-definition> ::= <identifier> <type>
-
-p. 
-__Sample:__
-
-bc(sample).. 
-CREATE TYPE address (
-    street_name text,
-    street_number int,
-    city text,
-    state text,
-    zip int
-)
-
-CREATE TYPE work_and_home_addresses (
-    home_address address,
-    work_address address
-)
-p. 
-The @CREATE TYPE@ statement creates a new user-defined type.  Each type is a 
set of named, typed fields.  Field types may be any valid type, including 
collections and other existing user-defined types.
-
-Attempting to create an already existing type will result in an error unless 
the @IF NOT EXISTS@ option is used.  If it is used, the statement will be a 
no-op if the type already exists.
-
-h4(#createTypeName). @<typename>@
-
-Valid type names are identifiers.  The names of existing CQL types and 
"reserved type names":#appendixB may not be used.
-
-If the type name is provided alone, the type is created with the current 
keyspace (see <a href="#useStmt"><tt>USE</tt></a>). If it is prefixed by an 
existing keyspace name, the type is created within the specified keyspace 
instead of the current keyspace.
-
-h3(#alterTypeStmt). ALTER TYPE
-
-__Syntax:__
-
-bc(syntax).. 
-<alter-type-stmt> ::= ALTER TYPE <typename> <instruction>
-
-<instruction> ::= ADD <field-name> <type>
-                | RENAME <field-name> TO <field-name> ( AND <field-name> TO 
<field-name> )*
-p. 
-__Sample:__
-
-bc(sample).. 
-ALTER TYPE address ADD country text
-
-ALTER TYPE address RENAME zip TO zipcode AND street_name TO street
-p. 
-The @ALTER TYPE@ statement is used to manipulate type definitions. It allows 
for adding new fields, renaming existing fields, or changing the type of 
existing fields.
-
-h3(#dropTypeStmt). DROP TYPE
-
-__Syntax:__
-
-bc(syntax).. 
-<drop-type-stmt> ::= DROP TYPE ( IF EXISTS )? <typename>
-p. 
-The @DROP TYPE@ statement results in the immediate, irreversible removal of a 
type.  Attempting to drop a type that is still in use by another type or a 
table will result in an error.
-
-If the type does not exist, an error will be returned unless @IF EXISTS@ is 
used, in which case the operation is a no-op.
-
-h3(#createTriggerStmt). CREATE TRIGGER
-
-__Syntax:__
-
-bc(syntax).. 
-<create-trigger-stmt> ::= CREATE TRIGGER ( IF NOT EXISTS )? ( <triggername> )?
-                            ON <tablename> 
-                            USING <string>
-
-p. 
-__Sample:__
-
-bc(sample). 
-CREATE TRIGGER myTrigger ON myTable USING 
'org.apache.cassandra.triggers.InvertedIndex';
-
-The actual logic that makes up the trigger can be written in any Java (JVM) language and exists outside the database. Place the trigger code in a @lib/triggers@ subdirectory of the Cassandra installation directory; it is loaded during cluster startup and exists on every node that participates in the cluster. A trigger defined on a table fires before a requested DML statement occurs, which ensures the atomicity of the transaction.
-
-h3(#dropTriggerStmt). DROP TRIGGER
-
-__Syntax:__
-
-bc(syntax).. 
-<drop-trigger-stmt> ::= DROP TRIGGER ( IF EXISTS )? ( <triggername> )?
-                            ON <tablename>
-p. 
-__Sample:__
-
-bc(sample). 
-DROP TRIGGER myTrigger ON myTable;
-
-The @DROP TRIGGER@ statement removes the registration of a trigger created using @CREATE TRIGGER@.
-
-h3(#createFunctionStmt). CREATE FUNCTION
-
-__Syntax:__
-
-bc(syntax).. 
-<create-function-stmt> ::= CREATE ( OR REPLACE )? 
-                            FUNCTION ( IF NOT EXISTS )?
-                            ( <keyspace> '.' )? <function-name>
-                            '(' <arg-name> <arg-type> ( ',' <arg-name> 
<arg-type> )* ')'
-                            ( CALLED | RETURNS NULL ) ON NULL INPUT
-                            RETURNS <type>
-                            LANGUAGE <language>
-                            AS <body>
-p. 
-__Sample:__
-
-bc(sample). 
-CREATE OR REPLACE FUNCTION somefunction
-    ( somearg int, anotherarg text, complexarg frozen<someUDT>, listarg 
list<bigint> )
-    RETURNS NULL ON NULL INPUT
-    RETURNS text
-    LANGUAGE java
-    AS $$
-       // some Java code
-    $$;
-CREATE FUNCTION IF NOT EXISTS akeyspace.fname
-    ( someArg int )
-    CALLED ON NULL INPUT
-    RETURNS text
-    LANGUAGE java
-    AS $$
-       // some Java code
-    $$;
-
-@CREATE FUNCTION@ creates or replaces a user-defined function.
-
-h4(#functionSignature). Function Signature
-
-Signatures are used to distinguish individual functions. The signature 
consists of:
-
-# The fully qualified function name - i.e. _keyspace_ plus _function-name_
-# The concatenated list of all argument types
-
-Note that keyspace names, function names and argument types are subject to the 
default naming conventions and case-sensitivity rules.
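-
-For instance, because the argument types are part of the signature, two functions with the same name but different argument types can coexist as overloads. The following sketch (the @to_text@ name and trivial Java bodies are purely illustrative) declares two such overloads:
-
-bc(sample). 
-CREATE FUNCTION to_text ( input int ) RETURNS NULL ON NULL INPUT RETURNS text LANGUAGE java AS 'return String.valueOf(input);';
-CREATE FUNCTION to_text ( input text ) RETURNS NULL ON NULL INPUT RETURNS text LANGUAGE java AS 'return input;';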
-
-@CREATE FUNCTION@ with the optional @OR REPLACE@ keywords either creates a 
function or replaces an existing one with the same signature. A @CREATE 
FUNCTION@ without @OR REPLACE@ fails if a function with the same signature 
already exists.
-
-Behavior on invocation with @null@ values must be defined for each function. 
There are two options:
-
-# @RETURNS NULL ON NULL INPUT@ declares that the function will always return 
@null@ if any of the input arguments is @null@.
-# @CALLED ON NULL INPUT@ declares that the function will always be executed.
-
-If the optional @IF NOT EXISTS@ keywords are used, the function will only be 
created if another function with the same signature does not exist.
-
-@OR REPLACE@ and @IF NOT EXISTS@ cannot be used together.
-
-Functions belong to a keyspace. If no keyspace is specified in 
@<function-name>@, the current keyspace is used (i.e. the keyspace specified 
using the "@USE@":#useStmt statement). It is not possible to create a 
user-defined function in one of the system keyspaces.
-
-See the section on "user-defined functions":#udfs for more information.
-
-h3(#dropFunctionStmt). DROP FUNCTION
-
-__Syntax:__
-
-bc(syntax).. 
-<drop-function-stmt> ::= DROP FUNCTION ( IF EXISTS )?
-                         ( <keyspace> '.' )? <function-name>
-                         ( '(' <arg-type> ( ',' <arg-type> )* ')' )?
-
-p. 
-__Sample:__
-
-bc(sample). 
-DROP FUNCTION myfunction;
-DROP FUNCTION mykeyspace.afunction;
-DROP FUNCTION afunction ( int );
-DROP FUNCTION afunction ( text );
-
-The @DROP FUNCTION@ statement removes a function created using @CREATE FUNCTION@.
-You must specify the argument types ("signature":#functionSignature ) of the 
function to drop if there are multiple functions with the same name but a 
different signature (overloaded functions).
-
-@DROP FUNCTION@ with the optional @IF EXISTS@ keywords drops a function if it 
exists.
-
-h3(#createAggregateStmt). CREATE AGGREGATE
-
-__Syntax:__
-
-bc(syntax).. 
-<create-aggregate-stmt> ::= CREATE ( OR REPLACE )? 
-                            AGGREGATE ( IF NOT EXISTS )?
-                            ( <keyspace> '.' )? <aggregate-name>
-                            '(' <arg-type> ( ',' <arg-type> )* ')'
-                            SFUNC <state-functionname>
-                            STYPE <state-type>
-                            ( FINALFUNC <final-functionname> )?
-                            ( INITCOND <init-cond> )?
-p. 
-__Sample:__
-
-bc(sample). 
-CREATE AGGREGATE myaggregate ( val text )
-  SFUNC myaggregate_state
-  STYPE text
-  FINALFUNC myaggregate_final
-  INITCOND 'foo';
-
-See the section on "user-defined aggregates":#udas for a complete example.
-
-@CREATE AGGREGATE@ creates or replaces a user-defined aggregate.
-
-@CREATE AGGREGATE@ with the optional @OR REPLACE@ keywords either creates an 
aggregate or replaces an existing one with the same signature. A @CREATE 
AGGREGATE@ without @OR REPLACE@ fails if an aggregate with the same signature 
already exists.
-
-@CREATE AGGREGATE@ with the optional @IF NOT EXISTS@ keywords creates an aggregate if it does not already exist.
-
-@OR REPLACE@ and @IF NOT EXISTS@ cannot be used together.
-
-Aggregates belong to a keyspace. If no keyspace is specified in 
@<aggregate-name>@, the current keyspace is used (i.e. the keyspace specified 
using the "@USE@":#useStmt statement). It is not possible to create a 
user-defined aggregate in one of the system keyspaces.
-
-Signatures for user-defined aggregates follow the "same 
rules":#functionSignature as for user-defined functions.
-
-@STYPE@ defines the type of the state value and must be specified.
-
-The optional @INITCOND@ defines the initial state value for the aggregate. It 
defaults to @null@. A non-@null@ @INITCOND@ must be specified for state 
functions that are declared with @RETURNS NULL ON NULL INPUT@.
-
-@SFUNC@ references an existing function to be used as the state-modifying function. The type of the first argument of the state function must match @STYPE@. The remaining argument types of the state function must match the argument types of the aggregate function. State is not updated for state functions declared with @RETURNS NULL ON NULL INPUT@ and called with @null@.
-
-The optional @FINALFUNC@ is called just before the aggregate result is returned. It must take a single argument of type @STYPE@. The return type of the @FINALFUNC@ may be a different type. A final function declared with @RETURNS NULL ON NULL INPUT@ means that the aggregate's return value will be @null@ if the last state is @null@.
-
-If no @FINALFUNC@ is defined, the overall return type of the aggregate 
function is @STYPE@.  If a @FINALFUNC@ is defined, it is the return type of 
that function.
-
-See the section on "user-defined aggregates":#udas for more information.
-
-h3(#dropAggregateStmt). DROP AGGREGATE
-
-__Syntax:__
-
-bc(syntax).. 
-<drop-aggregate-stmt> ::= DROP AGGREGATE ( IF EXISTS )?
-                         ( <keyspace> '.' )? <aggregate-name>
-                         ( '(' <arg-type> ( ',' <arg-type> )* ')' )?
-p. 
-
-__Sample:__
-
-bc(sample). 
-DROP AGGREGATE myAggregate;
-DROP AGGREGATE myKeyspace.anAggregate;
-DROP AGGREGATE someAggregate ( int );
-DROP AGGREGATE someAggregate ( text );
-
-The @DROP AGGREGATE@ statement removes an aggregate created using @CREATE 
AGGREGATE@.  You must specify the argument types of the aggregate to drop if 
there are multiple aggregates with the same name but a different signature 
(overloaded aggregates).
-
-@DROP AGGREGATE@ with the optional @IF EXISTS@ keywords drops an aggregate if it exists, and does nothing if an aggregate with that signature does not exist.
-
-Signatures for user-defined aggregates follow the "same 
rules":#functionSignature as for user-defined functions.
-
-h2(#dataManipulation). Data Manipulation
-
-h3(#insertStmt). INSERT
-
-__Syntax:__
-
-bc(syntax).. 
-<insertStatement> ::= INSERT INTO <tablename>
-                      ( ( <name-list> VALUES <value-list> )
-                      | ( JSON <string> ))
-                      ( IF NOT EXISTS )?
-                      ( USING <option> ( AND <option> )* )?
-
-<name-list> ::= '(' <identifier> ( ',' <identifier> )* ')'
-
-<value-list> ::= '(' <term> ( ',' <term> )* ')'
-
-<option> ::= TIMESTAMP <integer>
-           | TTL <integer>
-p. 
-__Sample:__
-
-bc(sample).. 
-INSERT INTO NerdMovies (movie, director, main_actor, year)
-                VALUES ('Serenity', 'Joss Whedon', 'Nathan Fillion', 2005)
-USING TTL 86400;
-
-INSERT INTO NerdMovies JSON '{"movie": "Serenity", "director": "Joss Whedon", 
"year": 2005}'
-p. 
-The @INSERT@ statement writes one or more columns for a given row in a table. Note that since a row is identified by its @PRIMARY KEY@, at least the columns composing it must be specified. The list of columns to insert must be supplied when using the @VALUES@ syntax. When using the @JSON@ syntax, they are optional. See the section on "@INSERT JSON@":#insertJson for more details.
-
-Note that unlike in SQL, @INSERT@ does not check the prior existence of the row by default: the row is created if none existed before, and updated otherwise. Furthermore, there is no means of knowing whether a creation or an update occurred.
-
-It is however possible to use the @IF NOT EXISTS@ condition to only insert if the row does not exist prior to the insertion. But please note that using @IF NOT EXISTS@ will incur a non-negligible performance cost (internally, Paxos will be used), so this should be used sparingly.
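-
-For instance, reusing the @NerdMovies@ sample table above, a conditional insert looks like the following (the returned result-set indicates whether the insertion actually took place):
-
-bc(sample). 
-INSERT INTO NerdMovies (movie, director, main_actor, year)
-                VALUES ('Serenity', 'Joss Whedon', 'Nathan Fillion', 2005)
-IF NOT EXISTS;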
-
-All updates for an @INSERT@ are applied atomically and in isolation.
-
-Please refer to the "@UPDATE@":#updateOptions section for information on the 
@<option>@ available and to the "collections":#collections section for use of 
@<collection-literal>@. Also note that @INSERT@ does not support counters, 
while @UPDATE@ does.
-
-h3(#updateStmt). UPDATE
-
-__Syntax:__
-
-bc(syntax).. 
-<update-stmt> ::= UPDATE <tablename>
-                  ( USING <option> ( AND <option> )* )?
-                  SET <assignment> ( ',' <assignment> )*
-                  WHERE <where-clause>
-                  ( IF <condition> ( AND condition )* )?
-
-<assignment> ::= <identifier> '=' <term>
-               | <identifier> '=' <identifier> ('+' | '-') (<int-term> | 
<set-literal> | <list-literal>)
-               | <identifier> '=' <identifier> '+' <map-literal>
-               | <identifier> '[' <term> ']' '=' <term>
-               | <identifier> '.' <field> '=' <term>
-
-<condition> ::= <identifier> <op> <term>
-              | <identifier> IN <in-values>
-              | <identifier> '[' <term> ']' <op> <term>
-              | <identifier> '[' <term> ']' IN <in-values>
-              | <identifier> '.' <field> <op> <term>
-              | <identifier> '.' <field> IN <in-values>
-
-<op> ::= '<' | '<=' | '=' | '!=' | '>=' | '>'
-<in-values> ::= (<variable> | '(' ( <term> ( ',' <term> )* )? ')')
-
-<where-clause> ::= <relation> ( AND <relation> )*
-
-<relation> ::= <identifier> '=' <term>
-             | '(' <identifier> (',' <identifier>)* ')' '=' <term-tuple>
-             | <identifier> IN '(' ( <term> ( ',' <term>)* )? ')'
-             | <identifier> IN <variable>
-             | '(' <identifier> (',' <identifier>)* ')' IN '(' ( <term-tuple> 
( ',' <term-tuple>)* )? ')'
-             | '(' <identifier> (',' <identifier>)* ')' IN <variable>
-
-<option> ::= TIMESTAMP <integer>
-           | TTL <integer>
-p. 
-__Sample:__
-
-bc(sample).. 
-UPDATE NerdMovies USING TTL 400
-SET director = 'Joss Whedon',
-    main_actor = 'Nathan Fillion',
-    year = 2005
-WHERE movie = 'Serenity';
-
-UPDATE UserActions SET total = total + 2 WHERE user = 
B70DE1D0-9908-4AE3-BE34-5573E5B09F14 AND action = 'click';
-p. 
-The @UPDATE@ statement writes one or more columns for a given row in a table. The @<where-clause>@ is used to select the row to update and must include all columns composing the @PRIMARY KEY@. Other column values are specified through @<assignment>@ after the @SET@ keyword.
-
-Note that unlike in SQL, @UPDATE@ does not check the prior existence of the 
row by default (except through the use of @<condition>@, see below): the row is 
created if none existed before, and updated otherwise. Furthermore, there are 
no means to know whether a creation or update occurred.
-
-It is however possible to use conditions on some columns through @IF@, in which case the row will not be updated unless the conditions are met. But please note that using @IF@ conditions will incur a non-negligible performance cost (internally, Paxos will be used), so this should be used sparingly.
-
-In an @UPDATE@ statement, all updates within the same partition key are 
applied atomically and in isolation.
-
-The @c = c + 3@ form of @<assignment>@ is used to increment/decrement counters. The identifier after the '=' sign *must* be the same as the one before the '=' sign. (Only increment/decrement is supported on counters, not the assignment of a specific value.)
-
-The @id = id + <collection-literal>@ and @id[value1] = value2@ forms of 
@<assignment>@ are for collections. Please refer to the "relevant 
section":#collections for more details.
-
-The @id.field = <term>@ form of @<assignment>@ is for setting the value of a single field of a non-frozen user-defined type.
-
-h4(#updateOptions). @<options>@
-
-The @UPDATE@ and @INSERT@ statements support the following options:
-* @TIMESTAMP@: sets the timestamp for the operation. If not specified, the 
coordinator will use the current time (in microseconds) at the start of 
statement execution as the timestamp. This is usually a suitable default.
-* @TTL@: specifies an optional Time To Live (in seconds) for the inserted 
values. If set, the inserted values are automatically removed from the database 
after the specified time. Note that the TTL concerns the inserted values, not 
the columns themselves. This means that any subsequent update of the column 
will also reset the TTL (to whatever TTL is specified in that update). By 
default, values never expire. A TTL of 0 is equivalent to no TTL. If the table 
has a default_time_to_live, a T [...]
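-
-Both options may be combined in a single @USING@ clause. For instance, reusing the @NerdMovies@ sample table, the following update sets an explicit timestamp together with a one-day TTL:
-
-bc(sample). 
-UPDATE NerdMovies USING TIMESTAMP 1442880000000000 AND TTL 86400
-SET main_actor = 'Nathan Fillion'
-WHERE movie = 'Serenity';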
-
-
-h3(#deleteStmt). DELETE
-
-__Syntax:__
-
-bc(syntax).. 
-<delete-stmt> ::= DELETE ( <selection> ( ',' <selection> )* )?
-                  FROM <tablename>
-                  ( USING TIMESTAMP <integer>)?
-                  WHERE <where-clause>
-                  ( IF ( EXISTS | ( <condition> ( AND <condition> )*) ) )?
-
-<selection> ::= <identifier>
-              | <identifier> '[' <term> ']'
-              | <identifier> '.' <field>
-
-<where-clause> ::= <relation> ( AND <relation> )*
-
-<relation> ::= <identifier> <op> <term>
-             | '(' <identifier> (',' <identifier>)* ')' <op> <term-tuple>
-             | <identifier> IN '(' ( <term> ( ',' <term>)* )? ')'
-             | <identifier> IN <variable>
-             | '(' <identifier> (',' <identifier>)* ')' IN '(' ( <term-tuple> 
( ',' <term-tuple>)* )? ')'
-             | '(' <identifier> (',' <identifier>)* ')' IN <variable>
-
-<op> ::= '=' | '<' | '>' | '<=' | '>='
-<in-values> ::= (<variable> | '(' ( <term> ( ',' <term> )* )? ')')
-
-<condition> ::= <identifier> (<op> | '!=') <term>
-              | <identifier> IN <in-values>
-              | <identifier> '[' <term> ']' (<op> | '!=') <term>
-              | <identifier> '[' <term> ']' IN <in-values>
-              | <identifier> '.' <field> (<op> | '!=') <term>
-              | <identifier> '.' <field> IN <in-values>
-
-p. 
-__Sample:__
-
-bc(sample).. 
-DELETE FROM NerdMovies USING TIMESTAMP 1240003134 WHERE movie = 'Serenity';
-
-DELETE phone FROM Users WHERE userid IN (C73DE1D3-AF08-40F3-B124-3FF3E5109F22, 
B70DE1D0-9908-4AE3-BE34-5573E5B09F14);
-p. 
-The @DELETE@ statement deletes columns and rows. If column names are provided 
directly after the @DELETE@ keyword, only those columns are deleted from the 
row indicated by the @<where-clause>@.  The @id[value]@ syntax in @<selection>@ 
is for non-frozen collections (please refer to the "collection 
section":#collections for more details).  The @id.field@ syntax is for the 
deletion of non-frozen user-defined types.  Otherwise, whole rows are removed. 
The @<where-clause>@ specifies which row [...]
-
-@DELETE@ supports the @TIMESTAMP@ option with the same semantics as the 
"@UPDATE@":#updateStmt statement.
-
-In a @DELETE@ statement, all deletions within the same partition key are 
applied atomically and in isolation.
-
-A @DELETE@ operation can be conditional through the use of an @IF@ clause, 
similar to @UPDATE@ and @INSERT@ statements. However, as with @INSERT@ and 
@UPDATE@ statements, this will incur a non-negligible performance cost 
(internally, Paxos will be used) and so should be used sparingly.
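-
-For instance, reusing the @NerdMovies@ sample table, conditional deletions look like:
-
-bc(sample). 
-DELETE FROM NerdMovies WHERE movie = 'Serenity' IF EXISTS;
-DELETE main_actor FROM NerdMovies WHERE movie = 'Serenity' IF director = 'Joss Whedon';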
-
-
-h3(#batchStmt). BATCH
-
-__Syntax:__
-
-bc(syntax).. 
-<batch-stmt> ::= BEGIN ( UNLOGGED | COUNTER ) BATCH
-                 ( USING <option> ( AND <option> )* )?
-                    <modification-stmt> ( ';' <modification-stmt> )*
-                 APPLY BATCH
-
-<modification-stmt> ::= <insert-stmt>
-                      | <update-stmt>
-                      | <delete-stmt>
-
-<option> ::= TIMESTAMP <integer>
-p. 
-__Sample:__
-
-bc(sample). 
-BEGIN BATCH
-  INSERT INTO users (userid, password, name) VALUES ('user2', 'ch@ngem3b', 
'second user');
-  UPDATE users SET password = 'ps22dhds' WHERE userid = 'user3';
-  INSERT INTO users (userid, password) VALUES ('user4', 'ch@ngem3c');
-  DELETE name FROM users WHERE userid = 'user1';
-APPLY BATCH;
-
-The @BATCH@ statement groups multiple modification statements (insertions/updates and deletions) into a single statement. It serves several purposes:
-# It saves network round-trips between the client and the server (and sometimes between the coordinator and the replicas) when batching multiple updates.
-# All updates in a @BATCH@ belonging to a given partition key are performed in 
isolation.
-# By default, all operations in the batch are performed as @LOGGED@, to ensure 
all mutations eventually complete (or none will).  See the notes on 
"@UNLOGGED@":#unloggedBatch for more details.
-
-Note that:
-* @BATCH@ statements may only contain @UPDATE@, @INSERT@ and @DELETE@ 
statements.
-* Batches are _not_ a full analogue for SQL transactions.
-* If a timestamp is not specified for each operation, then all operations will 
be applied with the same timestamp. Due to Cassandra's conflict resolution 
procedure in the case of "timestamp 
ties":http://wiki.apache.org/cassandra/FAQ#clocktie, operations may be applied 
in an order that is different from the order they are listed in the @BATCH@ 
statement. To force a particular operation ordering, you must specify 
per-operation timestamps.
-
-h4(#unloggedBatch). @UNLOGGED@
-
-By default, Cassandra uses a batch log to ensure all operations in a batch 
eventually complete or none will (note however that operations are only 
isolated within a single partition).
-
-There is a performance penalty for batch atomicity when a batch spans multiple partitions. If you do not want to incur this penalty, you can tell Cassandra to skip the batchlog with the @UNLOGGED@ option. If the @UNLOGGED@ option is used, a failed batch might leave the batch only partly applied.
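-
-An unlogged batch is written like a regular batch, with the @UNLOGGED@ keyword added (reusing the @users@ sample table from above):
-
-bc(sample). 
-BEGIN UNLOGGED BATCH
-  INSERT INTO users (userid, password) VALUES ('user5', 'ch@ngem3d');
-  UPDATE users SET password = 'ps22dhds' WHERE userid = 'user6';
-APPLY BATCH;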
-
-h4(#counterBatch). @COUNTER@
-
-Use the @COUNTER@ option for batched counter updates.  Unlike other updates in 
Cassandra, counter updates are not idempotent.
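-
-For instance, batched counter updates on the @UserActions@ table from the @UPDATE@ samples look like:
-
-bc(sample). 
-BEGIN COUNTER BATCH
-  UPDATE UserActions SET total = total + 2 WHERE user = B70DE1D0-9908-4AE3-BE34-5573E5B09F14 AND action = 'click';
-  UPDATE UserActions SET total = total + 1 WHERE user = B70DE1D0-9908-4AE3-BE34-5573E5B09F14 AND action = 'view';
-APPLY BATCH;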
-
-h4(#batchOptions). @<option>@
-
-@BATCH@ supports the @TIMESTAMP@ option, with semantics similar to those described in the "@UPDATE@":#updateOptions statement (the timestamp applies to all the statements inside the batch). However, if used, @TIMESTAMP@ *must not* be used in the statements within the batch.
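-
-For instance, a batch-level timestamp is supplied once in the @USING@ clause and applies to every statement in the batch (reusing the @users@ sample table):
-
-bc(sample). 
-BEGIN BATCH USING TIMESTAMP 1481124356754405
-  INSERT INTO users (userid, password) VALUES ('user7', 'ch@ngem3e');
-  DELETE name FROM users WHERE userid = 'user1';
-APPLY BATCH;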
-
-
-h2(#queries). Queries
-
-
-h3(#selectStmt). SELECT
-
-__Syntax:__
-
-bc(syntax).. 
-<select-stmt> ::= SELECT ( JSON )? <select-clause>
-                  FROM <tablename>
-                  ( WHERE <where-clause> )?
-                  ( GROUP BY <group-by>)?
-                  ( ORDER BY <order-by> )?
-                  ( PER PARTITION LIMIT <integer> )?
-                  ( LIMIT <integer> )?
-                  ( ALLOW FILTERING )?
-
-<select-clause> ::= DISTINCT? <selection-list>
-
-<selection-list> ::= <selector> (AS <identifier>)? ( ',' <selector> (AS 
<identifier>)? )*
-                   | '*'
-
-<selector> ::= <identifier>
-             | <term>
-             | WRITETIME '(' <identifier> ')'
-             | COUNT '(' '*' ')'
-             | TTL '(' <identifier> ')'
-             | CAST '(' <selector> AS <type> ')'
-             | <function> '(' (<selector> (',' <selector>)*)? ')'
-             | <selector> '.' <identifier>
-             | <selector> '[' <term> ']'
-             | <selector> '[' <term>? .. <term>? ']'
-
-<where-clause> ::= <relation> ( AND <relation> )*
-
-<relation> ::= <identifier> <op> <term>
-             | '(' <identifier> (',' <identifier>)* ')' <op> <term-tuple>
-             | <identifier> IN '(' ( <term> ( ',' <term>)* )? ')'
-             | '(' <identifier> (',' <identifier>)* ')' IN '(' ( <term-tuple> 
( ',' <term-tuple>)* )? ')'
-             | TOKEN '(' <identifier> ( ',' <identifier>)* ')' <op> <term>
-
-<op> ::= '=' | '<' | '>' | '<=' | '>=' | CONTAINS | CONTAINS KEY
-<group-by> ::= (<identifier>,)* (<identifier> | <function>)
-<order-by> ::= <ordering> ( ',' <ordering> )*
-<ordering> ::= <identifier> ( ASC | DESC )?
-<term-tuple> ::= '(' <term> (',' <term>)* ')'
-p. 
-__Sample:__
-
-bc(sample).. 
-SELECT name, occupation FROM users WHERE userid IN (199, 200, 207);
-
-SELECT JSON name, occupation FROM users WHERE userid = 199;
-
-SELECT name AS user_name, occupation AS user_occupation FROM users;
-
-SELECT time, value
-FROM events
-WHERE event_type = 'myEvent'
-  AND time > '2011-02-03'
-  AND time <= '2012-01-01'
-
-SELECT COUNT(*) FROM users;
-
-SELECT COUNT(*) AS user_count FROM users;
-
-p. 
-The @SELECT@ statement reads one or more columns for one or more rows in a 
table. It returns a result-set of rows, where each row contains the collection 
of columns corresponding to the query.  If the @JSON@ keyword is used, the 
results for each row will contain only a single column named "json".  See the 
section on "@SELECT JSON@":#selectJson for more details.
-
-h4(#selectSelection). @<select-clause>@
-
-The @<select-clause>@ determines which columns need to be queried and 
returned in the result-set. It consists of either a comma-separated list of 
@<selector>@s or the wildcard character (@*@) to select all the columns defined 
for the table. Please note that for wildcard @SELECT@ queries the order of 
columns returned is not specified and is not guaranteed to be stable between 
Cassandra versions.
-
-A @<selector>@ is either a column name to retrieve or a @<function>@ of one or 
more @<term>@s. The functions allowed are the same as for @<term>@ and are 
described in the "function section":#functions. In addition to these generic 
functions, the @WRITETIME@ (resp. @TTL@) function allows to select the 
timestamp of when the column was inserted (resp. the time to live (in seconds) 
for the column (or null if the column has no expiration set)) and the 
"@CAST@":#castFun function can be used to  [...]
-
-Additionally, individual values of maps and sets can be selected using @[ 
<term> ]@. For maps, this will return the value corresponding to the key, if 
such entry exists. For sets, this will return the key that is selected if it 
exists and is thus mainly a way to check element existence. It is also possible 
to select a slice of a set or map with @[ <term> ... <term> ]@, where both 
bounds can be omitted.
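-
-For illustration, assuming a hypothetical @users@ table with a @settings@ map 
column:
-
-bc(sample). 
-SELECT settings['theme'] FROM users WHERE userid = 199;
-SELECT settings['a' .. 'm'] FROM users WHERE userid = 199;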
-
-Any @<selector>@ can be aliased using the @AS@ keyword (see examples). Please note 
that @<where-clause>@ and @<order-by>@ clause should refer to the columns by 
their original names and not by their aliases.
-
-The @COUNT@ keyword can be used with parenthesis enclosing @*@. If so, the 
query will return a single result: the number of rows matching the query. Note 
that @COUNT(1)@ is supported as an alias.
-
-h4(#selectWhere). @<where-clause>@
-
-The @<where-clause>@ specifies which rows must be queried. It is composed of 
relations on the columns that are part of the @PRIMARY KEY@ and/or have a 
"secondary index":#createIndexStmt defined on them.
-
-Not all relations are allowed in a query. For instance, non-equal relations 
(where @IN@ is considered as an equal relation) on a partition key are not 
supported (but see the use of the @TOKEN@ method below to do non-equal queries 
on the partition key). Moreover, for a given partition key, the clustering 
columns induce an ordering of rows, and relations on them are restricted to 
those that select a *contiguous* (for the ordering) set of rows. 
For instance, given
-
-bc(sample). 
-CREATE TABLE posts (
-    userid text,
-    blog_title text,
-    posted_at timestamp,
-    entry_title text,
-    content text,
-    category int,
-    PRIMARY KEY (userid, blog_title, posted_at)
-)
-
-The following query is allowed:
-
-bc(sample). 
-SELECT entry_title, content FROM posts WHERE userid='john doe' AND 
blog_title='John''s Blog' AND posted_at >= '2012-01-01' AND posted_at < 
'2012-01-31'
-
-But the following one is not, as it does not select a contiguous set of rows 
(and we suppose no secondary indexes are set):
-
-bc(sample). 
-// Needs a blog_title to be set to select ranges of posted_at
-SELECT entry_title, content FROM posts WHERE userid='john doe' AND posted_at 
>= '2012-01-01' AND posted_at < '2012-01-31'
-
-When specifying relations, the @TOKEN@ function can be used on the @PARTITION 
KEY@ column to query. In that case, rows will be selected based on the token of 
their @PARTITION KEY@ rather than on the value. Note that the token of a key 
depends on the partitioner in use, and that in particular the RandomPartitioner 
won't yield a meaningful order. Also note that ordering partitioners always 
order token values by bytes (so even if the partition key is of type int, 
@token(-1) > token(0)@ in p [...]
-
-bc(sample). 
-SELECT * FROM posts WHERE token(userid) > token('tom') AND token(userid) < 
token('bob')
-
-Moreover, the @IN@ relation is only allowed on the last column of the 
partition key and on the last column of the full primary key.
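-
-For instance, on the @posts@ table above, @IN@ may be used on @posted_at@, the 
last clustering column of the full primary key:
-
-bc(sample). 
-SELECT * FROM posts WHERE userid='john doe' AND blog_title='John''s Blog' AND 
posted_at IN ('2012-01-01', '2012-01-02')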
-
-It is also possible to "group" @CLUSTERING COLUMNS@ together in a relation 
using the tuple notation. For instance:
-
-bc(sample). 
-SELECT * FROM posts WHERE userid='john doe' AND (blog_title, posted_at) > 
('John''s Blog', '2012-01-01')
-
-will request all rows that sort after the one having "John's Blog" as 
@blog_title@ and '2012-01-01' for @posted_at@ in the clustering order. In 
particular, rows having a @posted_at <= '2012-01-01'@ will be returned as long 
as their @blog_title > 'John''s Blog'@, which wouldn't be the case for:
-
-bc(sample). 
-SELECT * FROM posts WHERE userid='john doe' AND blog_title > 'John''s Blog' 
AND posted_at > '2012-01-01'
-
-The tuple notation may also be used for @IN@ clauses on @CLUSTERING COLUMNS@:
-
-bc(sample). 
-SELECT * FROM posts WHERE userid='john doe' AND (blog_title, posted_at) IN 
(('John''s Blog', '2012-01-01'), ('Extreme Chess', '2014-06-01'))
-
-The @CONTAINS@ operator may only be used on collection columns (lists, sets, 
and maps).  In the case of maps, @CONTAINS@ applies to the map values. The 
@CONTAINS KEY@ operator may only be used on map columns and applies to the map 
keys.
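-
-As an illustrative sketch (the @products@ table, its @tags@ set and @features@ 
map are hypothetical; without a secondary index on the collection, @ALLOW 
FILTERING@ is required):
-
-bc(sample). 
-SELECT * FROM products WHERE tags CONTAINS 'sale' ALLOW FILTERING;
-SELECT * FROM products WHERE features CONTAINS KEY 'color' ALLOW FILTERING;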
-
-h4(#selectOrderBy). @<order-by>@
-
-The @ORDER BY@ option allows to select the order of the returned results. It 
takes as argument a list of column names along with the order for the column 
(@ASC@ for ascending and @DESC@ for descending, omitting the order being 
equivalent to @ASC@). Currently the possible orderings are limited (which 
depends on the table "@CLUSTERING ORDER@":#createTableOptions ):
-* if the table has been defined without any specific @CLUSTERING ORDER@, the 
allowed orderings are the order induced by the clustering columns and the 
reverse of that one.
-* otherwise, the orderings allowed are the order of the @CLUSTERING ORDER@ 
option and the reversed one.
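-
-For instance, on the @posts@ table above, whose clustering order defaults to 
@(blog_title ASC, posted_at ASC)@, the reverse ordering can be requested with:
-
-bc(sample). 
-SELECT * FROM posts WHERE userid='john doe' ORDER BY blog_title DESC, 
posted_at DESC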
-
-h4(#selectGroupBy). @<group-by>@
-
-The @GROUP BY@ option allows to condense into a single row all selected rows 
that share the same values for a set of columns.
-
-Using the @GROUP BY@ option, it is only possible to group rows at the 
partition key level or at a clustering column level. Consequently, the @GROUP 
BY@ option only accepts as arguments primary key column names in the primary key 
order. If a primary key column is restricted by an equality restriction it is 
not required to be present in the @GROUP BY@ clause. The last argument can be a 
monotonic function on the primary key column.
-
-Aggregate functions will produce a separate value for each group. If no @GROUP 
BY@ clause is specified, aggregate functions will produce a single value for 
all the rows.
-
-If a column is selected without an aggregate function in a statement with a 
@GROUP BY@, the first value encountered in each group will be returned.
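-
-For instance, on the @posts@ table above, rows can be grouped at the 
partition key level:
-
-bc(sample). 
-SELECT userid, COUNT(*) FROM posts GROUP BY userid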
-
-h4(#selectLimit). @LIMIT@ and @PER PARTITION LIMIT@
-
-The @LIMIT@ option to a @SELECT@ statement limits the number of rows returned 
by a query, while the @PER PARTITION LIMIT@ option limits the number of rows 
returned for a given partition by the query. Note that both types of limit can 
be used in the same statement.
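-
-For instance, on the @posts@ table above, the following returns at most 2 
posts per user and at most 10 rows overall:
-
-bc(sample). 
-SELECT * FROM posts PER PARTITION LIMIT 2 LIMIT 10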
-
-h4(#selectAllowFiltering). @ALLOW FILTERING@
-
-By default, CQL only allows select queries that don't involve "filtering" 
server side, i.e. queries where we know that all (live) records read will be 
returned (maybe partly) in the result set. The reasoning is that those "non 
filtering" queries have predictable performance in the sense that they will 
execute in a time that is proportional to the amount of data *returned* by the 
query (which can be controlled through @LIMIT@).
-
-The @ALLOW FILTERING@ option explicitly allows (some) queries that require 
filtering. Please note that a query using @ALLOW FILTERING@ may thus 
have unpredictable performance (for the definition above), i.e. even a query 
that selects a handful of records *may* exhibit performance that depends on the 
total amount of data stored in the cluster.
-
-For instance, considering the following table holding user profiles with their 
year of birth (with a secondary index on it) and country of residence:
-
-bc(sample).. 
-CREATE TABLE users (
-    username text PRIMARY KEY,
-    firstname text,
-    lastname text,
-    birth_year int,
-    country text
-)
-
-CREATE INDEX ON users(birth_year);
-p. 
-
-Then the following queries are valid:
-
-bc(sample). 
-SELECT * FROM users;
-SELECT firstname, lastname FROM users WHERE birth_year = 1981;
-
-because in both cases, Cassandra guarantees that these queries' performance 
will be proportional to the amount of data returned. In particular, if no users 
are born in 1981, then the second query's performance will not depend on the 
number of user profiles stored in the database (not directly at least: due to 
secondary index implementation considerations, this query may still depend on 
the number of nodes in the cluster, which indirectly depends on the amount of 
data stored.  
Nevertheless, the  [...]
-
-However, the following query will be rejected:
-
-bc(sample). 
-SELECT firstname, lastname FROM users WHERE birth_year = 1981 AND country = 
'FR';
-
-because Cassandra cannot guarantee that it won't have to scan a large amount 
of data even if the result of the query is small. Typically, it will scan all 
the index entries for users born in 1981 even if only a handful are actually 
from France. However, if you "know what you are doing", you can force the 
execution of this query by using @ALLOW FILTERING@ and so the following query 
is valid:
-
-bc(sample). 
-SELECT firstname, lastname FROM users WHERE birth_year = 1981 AND country = 
'FR' ALLOW FILTERING;
-
-h2(#databaseRoles). Database Roles
-
-h3(#createRoleStmt). CREATE ROLE
-
-__Syntax:__
-
-bc(syntax).. 
-<create-role-stmt> ::= CREATE ROLE ( IF NOT EXISTS )? <identifier> ( WITH 
<option> ( AND <option> )* )?
-
-<option> ::= PASSWORD = <string>
-           | LOGIN = <boolean>
-           | SUPERUSER = <boolean>
-           | OPTIONS = <map_literal>
-p. 
-
-__Sample:__
-
-bc(sample). 
-CREATE ROLE new_role;
-CREATE ROLE alice WITH PASSWORD = 'password_a' AND LOGIN = true;
-CREATE ROLE bob WITH PASSWORD = 'password_b' AND LOGIN = true AND SUPERUSER = 
true;
-CREATE ROLE carlos WITH OPTIONS = { 'custom_option1' : 'option1_value', 
'custom_option2' : 99 };
-
-By default roles do not possess @LOGIN@ privileges or @SUPERUSER@ status.
-
-"Permissions":#permissions on database resources are granted to roles; types 
of resources include keyspaces, tables, functions and roles themselves. Roles 
may be granted to other roles to create hierarchical permissions structures; in 
these hierarchies, permissions and @SUPERUSER@ status are inherited, but the 
@LOGIN@ privilege is not. 
-
-If a role has the @LOGIN@ privilege, clients may identify as that role when 
connecting. For the duration of that connection, the client will acquire any 
roles and privileges granted to that role.
-
-Only a client with the @CREATE@ permission on the database roles resource 
may issue @CREATE ROLE@ requests (see the "relevant section":#permissions 
below), unless the client is a @SUPERUSER@. Role management in Cassandra is 
pluggable and custom implementations may support only a subset of the listed 
options.
-
-Role names should be quoted if they contain non-alphanumeric characters. 
-
-h4(#createRolePwd). Setting credentials for internal authentication
-
-Use the @WITH PASSWORD@ clause to set a password for internal authentication, 
enclosing the password in single quotation marks.
-If internal authentication has not been set up or the role does not have 
@LOGIN@ privileges, the @WITH PASSWORD@ clause is not necessary.
-
-h4(#createRoleConditional). Creating a role conditionally
-
-Attempting to create an existing role results in an invalid query condition 
unless the @IF NOT EXISTS@ option is used. If the option is used and the role 
exists, the statement is a no-op.
-
-bc(sample). 
-CREATE ROLE other_role;
-CREATE ROLE IF NOT EXISTS other_role;
-
-h3(#alterRoleStmt). ALTER ROLE
-
-__Syntax:__
-
-bc(syntax).. 
-<alter-role-stmt> ::= ALTER ROLE <identifier> ( WITH <option> ( AND <option> 
)* )?
-
-<option> ::= PASSWORD = <string>
-           | LOGIN = <boolean>
-           | SUPERUSER = <boolean>
-           | OPTIONS = <map_literal>
-p. 
-
-__Sample:__
-
-bc(sample). 
-ALTER ROLE bob WITH PASSWORD = 'PASSWORD_B' AND SUPERUSER = false;
-
-Conditions on executing @ALTER ROLE@ statements:
-
-* A client must have @SUPERUSER@ status to alter the @SUPERUSER@ status of 
another role
-* A client cannot alter the @SUPERUSER@ status of any role it currently holds
-* A client can only modify certain properties of the role with which it 
identified at login (e.g. @PASSWORD@)
-* To modify properties of a role, the client must be granted @ALTER@ 
"permission":#permissions on that role
-
-
-h3(#dropRoleStmt). DROP ROLE
-
-__Syntax:__
-
-bc(syntax).. 
-<drop-role-stmt> ::= DROP ROLE ( IF EXISTS )? <identifier>
-p. 
-
-__Sample:__
-
-bc(sample). 
-DROP ROLE alice;
-DROP ROLE IF EXISTS bob;
-
-
-@DROP ROLE@ requires the client to have @DROP@ "permission":#permissions on 
the role in question. In addition, a client may not @DROP@ the role with which 
it identified at login. Finally, only a client with @SUPERUSER@ status may @DROP@ 
another @SUPERUSER@ role.
-Attempting to drop a role which does not exist results in an invalid query 
condition unless the @IF EXISTS@ option is used. If the option is used and the 
role does not exist the statement is a no-op. 
-
-
-h3(#grantRoleStmt). GRANT ROLE
-
-__Syntax:__
-
-bc(syntax). 
-<grant-role-stmt> ::= GRANT <identifier> TO <identifier>
-
-
-__Sample:__
-
-bc(sample). 
-GRANT report_writer TO alice;
-
-This statement grants the @report_writer@ role to @alice@. Any permissions 
granted to @report_writer@ are also acquired by @alice@.
-Roles are modelled as a directed acyclic graph, so circular grants are not 
permitted. The following examples result in error conditions:
-
-bc(sample). 
-GRANT role_a TO role_b;
-GRANT role_b TO role_a;
-
-bc(sample). 
-GRANT role_a TO role_b;
-GRANT role_b TO role_c;
-GRANT role_c TO role_a;
-
-
-h3(#revokeRoleStmt). REVOKE ROLE
-
-__Syntax:__
-
-bc(syntax). 
-<revoke-role-stmt> ::= REVOKE <identifier> FROM <identifier>
-
-__Sample:__
-
-bc(sample). 
-REVOKE report_writer FROM alice;
-
-This statement revokes the @report_writer@ role from @alice@. Any permissions 
that @alice@ has acquired via the @report_writer@ role are also revoked. 
-
-h4(#listRolesStmt). LIST ROLES
-
-__Syntax:__
-
-bc(syntax). 
-<list-roles-stmt> ::= LIST ROLES ( OF <identifier> )? ( NORECURSIVE )?
-
-__Sample:__ 
-
-bc(sample). 
-LIST ROLES;
-
-Returns all known roles in the system. This requires @DESCRIBE@ permission on 
the database roles resource.
-
-bc(sample). 
-LIST ROLES OF @alice@;
-
-Enumerates all roles granted to @alice@, including those transitively acquired.
-
-bc(sample). 
-LIST ROLES OF @bob@ NORECURSIVE;
-
-Lists all roles directly granted to @bob@.
-
-h3(#createUserStmt). CREATE USER 
-
-Prior to the introduction of roles in Cassandra 2.2, authentication and 
authorization were based around the concept of a @USER@. For backward 
compatibility, the legacy syntax has been preserved with @USER@-centric 
statements becoming synonyms for the @ROLE@-based equivalents.
-
-__Syntax:__ 
-
-bc(syntax).. 
-<create-user-statement> ::= CREATE USER ( IF NOT EXISTS )? <identifier> ( WITH 
PASSWORD <string> )? (<option>)?
-
-<option> ::= SUPERUSER
-           | NOSUPERUSER
-p. 
-
-__Sample:__ 
-
-bc(sample). 
-CREATE USER alice WITH PASSWORD 'password_a' SUPERUSER;
-CREATE USER bob WITH PASSWORD 'password_b' NOSUPERUSER;
-
-@CREATE USER@ is equivalent to @CREATE ROLE@ where the @LOGIN@ option is 
@true@. So, the following pairs of statements are equivalent:
-
-bc(sample).. 
-CREATE USER alice WITH PASSWORD 'password_a' SUPERUSER;
-CREATE ROLE alice WITH PASSWORD = 'password_a' AND LOGIN = true AND SUPERUSER 
= true;
-
-CREATE USER IF NOT EXISTS alice WITH PASSWORD 'password_a' SUPERUSER;
-CREATE ROLE IF NOT EXISTS alice WITH PASSWORD = 'password_a' AND LOGIN = true 
AND SUPERUSER = true;
-
-CREATE USER alice WITH PASSWORD 'password_a' NOSUPERUSER;
-CREATE ROLE alice WITH PASSWORD = 'password_a' AND LOGIN = true AND SUPERUSER 
= false;
-
-CREATE USER alice WITH PASSWORD 'password_a' NOSUPERUSER;
-CREATE ROLE alice WITH PASSWORD = 'password_a' AND LOGIN = true;
-
-CREATE USER alice WITH PASSWORD 'password_a';
-CREATE ROLE alice WITH PASSWORD = 'password_a' AND LOGIN = true;
-p. 
-
-h3(#alterUserStmt). ALTER USER 
-
-__Syntax:__ 
-
-bc(syntax).. 
-<alter-user-statement> ::= ALTER USER <identifier> ( WITH PASSWORD <string> )? 
( <option> )?
-
-<option> ::= SUPERUSER
-           | NOSUPERUSER
-p. 
-
-bc(sample). 
-ALTER USER alice WITH PASSWORD 'PASSWORD_A';
-ALTER USER bob SUPERUSER;
-
-h3(#dropUserStmt). DROP USER 
-
-__Syntax:__ 
-
-bc(syntax).. 
-<drop-user-stmt> ::= DROP USER ( IF EXISTS )? <identifier>
-p. 
-
-__Sample:__ 
-
-bc(sample). 
-DROP USER alice;
-DROP USER IF EXISTS bob;
-
-h3(#listUsersStmt). LIST USERS
-
-__Syntax:__
-
-bc(syntax). 
-<list-users-stmt> ::= LIST USERS;
-
-__Sample:__
-
-bc(sample). 
-LIST USERS;
-
-This statement is equivalent to
-
-bc(sample). 
-LIST ROLES;
-
-but only roles with the @LOGIN@ privilege are included in the output.
-
-
-h2(#dataControl). Data Control
-
-h3(#permissions). Permissions 
-
-Permissions on resources are granted to roles; there are several different 
types of resources in Cassandra and each type is modelled hierarchically:
-
-* The hierarchy of Data resources, Keyspaces and Tables has the structure @ALL 
KEYSPACES@ -> @KEYSPACE@ -> @TABLE@
-* Function resources have the structure @ALL FUNCTIONS@ -> @KEYSPACE@ -> 
@FUNCTION@
-* Resources representing roles have the structure @ALL ROLES@ -> @ROLE@
-* Resources representing JMX ObjectNames, which map to sets of MBeans/MXBeans, 
have the structure @ALL MBEANS@ -> @MBEAN@
-
-Permissions can be granted at any level of these hierarchies and they flow 
downwards. So granting a permission on a resource higher up the chain 
automatically grants that same permission on all resources lower down. For 
example, granting @SELECT@ on a @KEYSPACE@ automatically grants it on all 
@TABLES@ in that @KEYSPACE@. Likewise, granting a permission on @ALL FUNCTIONS@ 
grants it on every defined function, regardless of which keyspace it is scoped 
in. It is also possible to grant permis [...]
-
-Modifications to permissions are visible to existing client sessions; that is, 
connections need not be re-established following permissions changes.
-
-The full set of available permissions is:
-* @CREATE@
-* @ALTER@
-* @DROP@
-* @SELECT@
-* @MODIFY@
-* @AUTHORIZE@
-* @DESCRIBE@
-* @EXECUTE@
-
-Not all permissions are applicable to every type of resource. For instance, 
@EXECUTE@ is only relevant in the context of functions or mbeans; granting 
@EXECUTE@ on a resource representing a table is nonsensical. Attempting to 
@GRANT@ a permission on a resource to which it cannot be applied results in an 
error response. The following illustrates which permissions can be granted on 
which types of resource, and which statements are enabled by that permission.
-
-|_. permission |_. resource                   |_. operations        |
-| @CREATE@     | @ALL KEYSPACES@              |@CREATE KEYSPACE@ ==<br>== 
@CREATE TABLE@ in any keyspace|
-| @CREATE@     | @KEYSPACE@                   |@CREATE TABLE@ in specified 
keyspace|
-| @CREATE@     | @ALL FUNCTIONS@              |@CREATE FUNCTION@ in any 
keyspace ==<br>== @CREATE AGGREGATE@ in any keyspace|
-| @CREATE@     | @ALL FUNCTIONS IN KEYSPACE@  |@CREATE FUNCTION@ in keyspace 
==<br>== @CREATE AGGREGATE@ in keyspace|
-| @CREATE@     | @ALL ROLES@                  |@CREATE ROLE@|
-| @ALTER@      | @ALL KEYSPACES@              |@ALTER KEYSPACE@ ==<br>== 
@ALTER TABLE@ in any keyspace|
-| @ALTER@      | @KEYSPACE@                   |@ALTER KEYSPACE@ ==<br>== 
@ALTER TABLE@ in keyspace|
-| @ALTER@      | @TABLE@                      |@ALTER TABLE@|
-| @ALTER@      | @ALL FUNCTIONS@              |@CREATE FUNCTION@ replacing any 
existing ==<br>== @CREATE AGGREGATE@ replacing any existing|
-| @ALTER@      | @ALL FUNCTIONS IN KEYSPACE@  |@CREATE FUNCTION@ replacing 
existing in keyspace ==<br>== @CREATE AGGREGATE@ replacing any existing in 
keyspace|
-| @ALTER@      | @FUNCTION@                   |@CREATE FUNCTION@ replacing 
existing ==<br>== @CREATE AGGREGATE@ replacing existing|
-| @ALTER@      | @ALL ROLES@                  |@ALTER ROLE@ on any role|
-| @ALTER@      | @ROLE@                       |@ALTER ROLE@|
-| @DROP@       | @ALL KEYSPACES@              |@DROP KEYSPACE@ ==<br>== @DROP 
TABLE@ in any keyspace|
-| @DROP@       | @KEYSPACE@                   |@DROP TABLE@ in specified 
keyspace|
-| @DROP@       | @TABLE@                      |@DROP TABLE@|
-| @DROP@       | @ALL FUNCTIONS@              |@DROP FUNCTION@ in any keyspace 
==<br>== @DROP AGGREGATE@ in any existing|
-| @DROP@       | @ALL FUNCTIONS IN KEYSPACE@  |@DROP FUNCTION@ in keyspace 
==<br>== @DROP AGGREGATE@ in existing|
-| @DROP@       | @FUNCTION@                   |@DROP FUNCTION@|
-| @DROP@       | @ALL ROLES@                  |@DROP ROLE@ on any role|
-| @DROP@       | @ROLE@                       |@DROP ROLE@|
-| @SELECT@     | @ALL KEYSPACES@              |@SELECT@ on any table|
-| @SELECT@     | @KEYSPACE@                   |@SELECT@ on any table in 
keyspace|
-| @SELECT@     | @TABLE@                      |@SELECT@ on specified table|
-| @SELECT@     | @ALL MBEANS@                 |Call getter methods on any 
mbean|
-| @SELECT@     | @MBEANS@                     |Call getter methods on any 
mbean matching a wildcard pattern|
-| @SELECT@     | @MBEAN@                      |Call getter methods on named 
mbean|
-| @MODIFY@     | @ALL KEYSPACES@              |@INSERT@ on any table ==<br>== 
@UPDATE@ on any table ==<br>== @DELETE@ on any table ==<br>== @TRUNCATE@ on any 
table|
-| @MODIFY@     | @KEYSPACE@                   |@INSERT@ on any table in 
keyspace ==<br>== @UPDATE@ on any table in keyspace ==<br>== @DELETE@ on any 
table in keyspace ==<br>== @TRUNCATE@ on any table in keyspace|
-| @MODIFY@     | @TABLE@                      |@INSERT@ ==<br>== @UPDATE@ 
==<br>== @DELETE@ ==<br>== @TRUNCATE@|
-| @MODIFY@     | @ALL MBEANS@                 |Call setter methods on any 
mbean|
-| @MODIFY@     | @MBEANS@                     |Call setter methods on any 
mbean matching a wildcard pattern|
-| @MODIFY@     | @MBEAN@                      |Call setter methods on named 
mbean|
-| @AUTHORIZE@  | @ALL KEYSPACES@              |@GRANT PERMISSION@ on any table 
==<br>== @REVOKE PERMISSION@ on any table|
-| @AUTHORIZE@  | @KEYSPACE@                   |@GRANT PERMISSION@ on table in 
keyspace ==<br>== @REVOKE PERMISSION@ on table in keyspace|
-| @AUTHORIZE@  | @TABLE@                      |@GRANT PERMISSION@ ==<br>== 
@REVOKE PERMISSION@ |
-| @AUTHORIZE@  | @ALL FUNCTIONS@              |@GRANT PERMISSION@ on any 
function ==<br>== @REVOKE PERMISSION@ on any function|
-| @AUTHORIZE@  | @ALL FUNCTIONS IN KEYSPACE@  |@GRANT PERMISSION@ in keyspace 
==<br>== @REVOKE PERMISSION@ in keyspace|
-| @AUTHORIZE@  | @FUNCTION@                   |@GRANT PERMISSION@ ==<br>== 
@REVOKE PERMISSION@|
-| @AUTHORIZE@  | @ALL MBEANS@                 |@GRANT PERMISSION@ on any mbean 
==<br>== @REVOKE PERMISSION@ on any mbean|
-| @AUTHORIZE@  | @MBEANS@                     |@GRANT PERMISSION@ on any mbean 
matching a wildcard pattern ==<br>== @REVOKE PERMISSION@ on any mbean matching 
a wildcard pattern|
-| @AUTHORIZE@  | @MBEAN@                      |@GRANT PERMISSION@ on named 
mbean ==<br>== @REVOKE PERMISSION@ on named mbean|
-| @AUTHORIZE@  | @ALL ROLES@                  |@GRANT ROLE@ grant any role 
==<br>== @REVOKE ROLE@ revoke any role|
-| @AUTHORIZE@  | @ROLES@                      |@GRANT ROLE@ grant role 
==<br>== @REVOKE ROLE@ revoke role|
-| @DESCRIBE@   | @ALL ROLES@                  |@LIST ROLES@ all roles or only 
roles granted to another, specified role|
-| @DESCRIBE@   | @ALL MBEANS@                 |Retrieve metadata about any 
mbean from the platform's MBeanServer|
-| @DESCRIBE@   | @MBEANS@                     |Retrieve metadata about any 
mbean matching a wildcard pattern from the platform's MBeanServer|
-| @DESCRIBE@   | @MBEAN@                      |Retrieve metadata about a named 
mbean from the platform's MBeanServer|
-| @EXECUTE@    | @ALL FUNCTIONS@              |@SELECT@, @INSERT@, @UPDATE@ 
using any function ==<br>== use of any function in @CREATE AGGREGATE@|
-| @EXECUTE@    | @ALL FUNCTIONS IN KEYSPACE@  |@SELECT@, @INSERT@, @UPDATE@ 
using any function in keyspace ==<br>== use of any function in keyspace in 
@CREATE AGGREGATE@|
-| @EXECUTE@    | @FUNCTION@                   |@SELECT@, @INSERT@, @UPDATE@ 
using function ==<br>== use of function in @CREATE AGGREGATE@|
-| @EXECUTE@    | @ALL MBEANS@                 |Execute operations on any mbean|
-| @EXECUTE@    | @MBEANS@                     |Execute operations on any mbean 
matching a wildcard pattern|
-| @EXECUTE@    | @MBEAN@                      |Execute operations on named 
mbean|
-
-
-h3(#grantPermissionsStmt). GRANT PERMISSION
-
-__Syntax:__ 
-
-bc(syntax).. 
-<grant-permission-stmt> ::= GRANT ( ALL ( PERMISSIONS )? | <permission> ( 
PERMISSION )? ) ON <resource> TO <identifier>
-
-<permission> ::= CREATE | ALTER | DROP | SELECT | MODIFY | AUTHORIZE | DESCRIBE 
| EXECUTE
-
-<resource> ::= ALL KEYSPACES
-             | KEYSPACE <identifier>
-             | ( TABLE )? <tablename>
-             | ALL ROLES
-             | ROLE <identifier>
-             | ALL FUNCTIONS ( IN KEYSPACE <identifier> )?
-             | FUNCTION <functionname>
-             | ALL MBEANS
-             | ( MBEAN | MBEANS ) <objectName>
-p. 
-
-__Sample:__ 
-
-bc(sample). 
-GRANT SELECT ON ALL KEYSPACES TO data_reader;
-
-This gives any user with the role @data_reader@ permission to execute @SELECT@ 
statements on any table across all keyspaces.
-
-bc(sample). 
-GRANT MODIFY ON KEYSPACE keyspace1 TO data_writer;
-
-This gives any user with the role @data_writer@ permission to perform 
@INSERT@, @UPDATE@, @DELETE@ and @TRUNCATE@ queries on all tables in the 
@keyspace1@ keyspace.
-
-bc(sample). 
-GRANT DROP ON keyspace1.table1 TO schema_owner;
-
-This gives any user with the @schema_owner@ role permissions to @DROP@ 
@keyspace1.table1@.
-
-bc(sample). 
-GRANT EXECUTE ON FUNCTION keyspace1.user_function( int ) TO report_writer;
-
-This grants any user with the @report_writer@ role permission to execute 
@SELECT@, @INSERT@ and @UPDATE@ queries which use the function 
@keyspace1.user_function( int )@.
-
-bc(sample). 
-GRANT DESCRIBE ON ALL ROLES TO role_admin;
-
-This grants any user with the @role_admin@ role permission to view any and all 
roles in the system with a @LIST ROLES@ statement.
-
-h4(#grantAll). GRANT ALL 
-
-When the @GRANT ALL@ form is used, the appropriate set of permissions is 
determined automatically based on the target resource.
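-
-For example, on the @keyspace1.table1@ table used in the samples above, the 
following grants every permission applicable to a table:
-
-bc(sample). 
-GRANT ALL PERMISSIONS ON keyspace1.table1 TO schema_owner;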
-
-h4(#autoGrantPermissions). Automatic Granting
-
-When a resource is created, via a @CREATE KEYSPACE@, @CREATE TABLE@, @CREATE 
FUNCTION@, @CREATE AGGREGATE@ or @CREATE ROLE@ statement, the creator (the role 
the database user who issues the statement is identified as) is automatically 
granted all applicable permissions on the new resource.
-
-
-h3(#revokePermissionsStmt). REVOKE PERMISSION
-
-__Syntax:__ 
-
-bc(syntax).. 
-<revoke-permission-stmt> ::= REVOKE ( ALL ( PERMISSIONS )? | <permission> ( 
PERMISSION )? ) ON <resource> FROM <identifier>
-
-<permission> ::= CREATE | ALTER | DROP | SELECT | MODIFY | AUTHORIZE | DESCRIBE 
| EXECUTE
-
-<resource> ::= ALL KEYSPACES
-             | KEYSPACE <identifier>
-             | ( TABLE )? <tablename>
-             | ALL ROLES
-             | ROLE <identifier>
-             | ALL FUNCTIONS ( IN KEYSPACE <identifier> )?
-             | FUNCTION <functionname>
-             | ALL MBEANS
-             | ( MBEAN | MBEANS ) <objectName>
-p.
-
-__Sample:__ 
-
-bc(sample).. 
-REVOKE SELECT ON ALL KEYSPACES FROM data_reader;
-REVOKE MODIFY ON KEYSPACE keyspace1 FROM data_writer;
-REVOKE DROP ON keyspace1.table1 FROM schema_owner;
-REVOKE EXECUTE ON FUNCTION keyspace1.user_function( int ) FROM report_writer;
-REVOKE DESCRIBE ON ALL ROLES FROM role_admin;
-p. 
-
-h4(#listPermissionsStmt). LIST PERMISSIONS
-
-__Syntax:__
-
-bc(syntax).. 
-<list-permissions-stmt> ::= LIST ( ALL ( PERMISSIONS )? | <permission> ) 
-                                 ( ON <resource> )? 
-                                 ( OF <identifier> ( NORECURSIVE )? )?
-
-<resource> ::= ALL KEYSPACES
-             | KEYSPACE <identifier>
-             | ( TABLE )? <tablename>
-             | ALL ROLES
-             | ROLE <identifier>
-             | ALL FUNCTIONS ( IN KEYSPACE <identifier> )?
-             | FUNCTION <functionname>
-             | ALL MBEANS
-             | ( MBEAN | MBEANS ) <objectName>
-p.
-
-__Sample:__
-
-bc(sample). 
-LIST ALL PERMISSIONS OF alice;
-
-Show all permissions granted to @alice@, including those acquired transitively 
from any other roles. 
-
-bc(sample). 
-LIST ALL PERMISSIONS ON keyspace1.table1 OF bob;
-
-Show all permissions on @keyspace1.table1@ granted to @bob@, including those 
acquired transitively from any other roles. This also includes any permissions 
higher up the resource hierarchy which can be applied to @keyspace1.table1@. 
For example, should @bob@ have @ALTER@ permission on @keyspace1@, that would be 
included in the results of this query. Adding the @NORECURSIVE@ switch 
restricts the results to only those permissions which were directly granted to 
@bob@ or one of @bob@'s roles.
-
-bc(sample). 
-LIST SELECT PERMISSIONS OF carlos;
-
-Show any permissions granted to @carlos@ or any of @carlos@'s roles, limited 
to @SELECT@ permissions on any resource.
-
-h2(#types). Data Types
-
-CQL supports a rich set of data types for columns defined in a table, 
including collection types. On top of those native
-and collection types, users can also provide custom types (through a Java 
class extending @AbstractType@ loadable by
-Cassandra). The syntax of types is thus:
-
-bc(syntax).. 
-<type> ::= <native-type>
-         | <collection-type>
-         | <tuple-type>
-         | <string>       // Used for custom types. The fully-qualified name of a Java class
-
-<native-type> ::= ascii
-                | bigint
-                | blob
-                | boolean
-                | counter
-                | date
-                | decimal
-                | double
-                | float
-                | inet
-                | int
-                | smallint
-                | text
-                | time
-                | timestamp
-                | timeuuid
-                | tinyint
-                | uuid
-                | varchar
-                | varint
-
-<collection-type> ::= list '<' <native-type> '>'
-                    | set  '<' <native-type> '>'
-                    | map  '<' <native-type> ',' <native-type> '>'
-<tuple-type> ::= tuple '<' <type> (',' <type>)* '>'
-p. Note that the native types are keywords and as such are case-insensitive. They are, however, not reserved.
-
-p. The following table gives additional information on the native data types, and on which kind of "constants":#constants each type supports:
-
-|_. type    |_. constants supported|_. description|
-|@ascii@    |   strings            |ASCII character string|
-|@bigint@   |   integers           |64-bit signed long|
-|@blob@     |   blobs              |Arbitrary bytes (no validation)|
-|@boolean@  |   booleans           |true or false|
-|@counter@  |   integers           |Counter column (64-bit signed value). See 
"Counters":#counters for details|
-|@date@     |   integers, strings  |A date (with no corresponding time value). 
 See "Working with dates":#usingdates below for more information.|
-|@decimal@  |   integers, floats   |Variable-precision decimal|
-|@double@   |   integers           |64-bit IEEE-754 floating point|
-|@float@    |   integers, floats   |32-bit IEEE-754 floating point|
-|@inet@     |   strings            |An IP address. It can be either 4 bytes long (IPv4) or 16 bytes long (IPv6). There is no @inet@ constant; IP addresses should be input as strings|
-|@int@      |   integers           |32-bit signed int|
-|@smallint@ |   integers           |16-bit signed int|
-|@text@     |   strings            |UTF8 encoded string|
-|@time@     |   integers, strings  |A time with nanosecond precision.  See 
"Working with time":#usingtime below for more information.|
-|@timestamp@|   integers, strings  |A timestamp. String constants allow inputting timestamps as dates; see "Working with timestamps":#usingtimestamps below for more information.|
-|@timeuuid@ |   uuids              |Type 1 UUID. This is generally used as a 
"conflict-free" timestamp. Also see the "functions on Timeuuid":#timeuuidFun|
-|@tinyint@  |   integers           |8-bit signed int|
-|@uuid@     |   uuids              |Type 1 or type 4 UUID|
-|@varchar@  |   strings            |UTF8 encoded string|
-|@varint@   |   integers           |Arbitrary-precision integer|
-
-For more information on how to use the collection types, see the "Working with 
collections":#collections section below.
-
-h3(#usingtimestamps). Working with timestamps
-
-Values of the @timestamp@ type are encoded as 64-bit signed integers 
representing a number of milliseconds since the standard base time known as 
"the epoch": January 1 1970 at 00:00:00 GMT.
-
-Timestamps can be input in CQL as simple long integers, giving the number of milliseconds since the epoch, as defined above.
-
-They can also be input as string literals in any of the following ISO 8601 formats, each representing the time and date Feb 3, 2011, at 04:05:00 AM, GMT:
-
-* @2011-02-03 04:05+0000@
-* @2011-02-03 04:05:00+0000@
-* @2011-02-03 04:05:00.000+0000@
-* @2011-02-03T04:05+0000@
-* @2011-02-03T04:05:00+0000@
-* @2011-02-03T04:05:00.000+0000@
-
-The @+0000@ above is an RFC 822 4-digit time zone specification; @+0000@ refers to GMT. US Pacific Standard Time is @-0800@. The time zone may be omitted if desired, in which case the date will be interpreted as being in the time zone under which the coordinating Cassandra node is configured.
-
-* @2011-02-03 04:05@
-* @2011-02-03 04:05:00@
-* @2011-02-03 04:05:00.000@
-* @2011-02-03T04:05@
-* @2011-02-03T04:05:00@
-* @2011-02-03T04:05:00.000@
-
-There are clear difficulties inherent in relying on the time zone 
configuration being as expected, though, so it is recommended that the time 
zone always be specified for timestamps when feasible.
-
-The time of day may also be omitted, if the date is the only piece that 
matters:
-
-* @2011-02-03@
-* @2011-02-03+0000@
-
-In that case, the time of day will default to 00:00:00, in the specified or 
default time zone.
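As a concrete check of the encoding described above, the following Python sketch (an illustration, not Cassandra code) converts the example date to its 64-bit millisecond form:

```python
# Sketch: the 64-bit integer form of the timestamp '2011-02-03 04:05+0000',
# i.e. milliseconds since the epoch (January 1 1970 at 00:00:00 GMT).
from datetime import datetime, timezone

ts = datetime(2011, 2, 3, 4, 5, 0, tzinfo=timezone.utc)
millis = int(ts.timestamp() * 1000)
print(millis)  # 1296705900000
```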
-
-
-h3(#usingdates). Working with dates
-
-Values of the @date@ type are encoded as 32-bit unsigned integers representing a number of days, with "the epoch" at the center of the range (2^31). The epoch is January 1st, 1970.
-
-A date can be input in CQL as an unsigned integer as defined above.
-
-Dates can also be input as string literals in the following format:
-
-* @2014-01-01@
-
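To illustrate the center-of-range encoding, here is a small Python sketch (an illustration, not Cassandra code) computing the unsigned integer that @2014-01-01@ maps to:

```python
# Sketch: the 32-bit unsigned encoding of the date 2014-01-01,
# where the epoch (1970-01-01) sits at the center of the range, 2^31.
from datetime import date

days_since_epoch = (date(2014, 1, 1) - date(1970, 1, 1)).days
encoded = 2**31 + days_since_epoch
print(days_since_epoch, encoded)  # 16071 2147499719
```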
-
-h3(#usingtime). Working with time
-
-Values of the @time@ type are encoded as 64-bit signed integers representing 
the number of nanoseconds since midnight.
-
-A time can be input in CQL as simple long integers, giving the number of 
nanoseconds since midnight.
-
-They can also be input as string literals in any of the following formats:
-
-* @08:12:54@
-* @08:12:54.123@
-* @08:12:54.123456@
-* @08:12:54.123456789@
-
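The mapping between the string literals above and the stored integer is plain arithmetic; this Python sketch (an illustration, not Cassandra code) computes it for the last example:

```python
# Sketch: the 64-bit integer form of the time '08:12:54.123456789',
# i.e. nanoseconds since midnight.
h, m, s, ns = 8, 12, 54, 123456789
total_ns = (h * 3600 + m * 60 + s) * 1_000_000_000 + ns
print(total_ns)  # 29574123456789
```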
-
-h3(#counters). Counters
-
-The @counter@ type is used to define _counter columns_. A counter column is a column whose value is a 64-bit signed integer and on which 2 operations are supported: incrementation and decrementation (see "@UPDATE@":#updateStmt for syntax).  Note that the value of a counter cannot be set. A counter doesn't exist until first incremented/decremented, and the first incrementation/decrementation is made as if the previous value was 0. Deletion of counter columns is supported but has some limitati [...]
-
-The use of the counter type is limited in the following way:
-* It cannot be used for a column that is part of the @PRIMARY KEY@ of a table.
-* A table that contains a counter can only contain counters. In other words, 
either all the columns of a table outside the @PRIMARY KEY@ have the counter 
type, or none of them have it.
-
-h3(#collections). Working with collections
-
-h4. Noteworthy characteristics
-
-Collections are meant for storing/denormalizing relatively small amounts of data. They work well for things like "the phone numbers of a given user", "labels applied to an email", etc. But when items are expected to grow unbounded ("all the messages sent by a given user", "events registered by a sensor", ...), collections are no longer appropriate and a specific table (with clustering columns) should be used. Concretely, collections have the following limitations:
-* Collections are always read in their entirety (and reading one is not paged 
internally).
-* Collections cannot have more than 65535 elements. More precisely, while it 
may be possible to insert more than 65535 elements, it is not possible to read 
more than the 65535 first elements (see 
"CASSANDRA-5428":https://issues.apache.org/jira/browse/CASSANDRA-5428 for 
details).
-* While insertion operations on sets and maps never incur a read-before-write 
internally, some operations on lists do (see the section on lists below for 
details). It is thus advised to prefer sets over lists when possible.
-
-Please note that while some of those limitations may be loosened in the future, the general rule that collections are for denormalizing small amounts of data is meant to stay.
-
-
-h4(#map). Maps
-
-A @map@ is a "typed":#types set of key-value pairs, where keys are unique. Furthermore, note that maps are internally sorted by their keys and will thus always be returned in that order. To create a column of type @map@, use the @map@ keyword suffixed with comma-separated key and value types, enclosed in angle brackets.  For example:
-
-bc(sample). 
-CREATE TABLE users (
-    id text PRIMARY KEY,
-    given text,
-    surname text,
-    favs map<text, text>   // A map of text keys, and text values
-)
-
-Writing @map@ data is accomplished with a JSON-inspired syntax. To write a 
record using @INSERT@, specify the entire map as a JSON-style associative 
array. _Note: This form will always replace the entire map._
-
-bc(sample). 
-// Inserting (or Updating)
-INSERT INTO users (id, given, surname, favs)
-           VALUES ('jsmith', 'John', 'Smith', { 'fruit' : 'apple', 'band' : 
'Beatles' })
-
-Adding or updating key-values of a (potentially) existing map can be 
accomplished either by subscripting the map column in an @UPDATE@ statement or 
by adding a new map literal:
-
-bc(sample). 
-// Updating (or inserting)
-UPDATE users SET favs['author'] = 'Ed Poe' WHERE id = 'jsmith'
-UPDATE users SET favs = favs +  { 'movie' : 'Casablanca' } WHERE id = 'jsmith'
-
-Note that TTLs are allowed for both @INSERT@ and @UPDATE@, but in both cases the TTL set only applies to the newly inserted/updated _values_. In other words,
-
-bc(sample). 
-// Updating (or inserting)
-UPDATE users USING TTL 10 SET favs['color'] = 'green' WHERE id = 'jsmith'
-
-will only apply the TTL to the @{ 'color' : 'green' }@ record, the rest of the 
map remaining unaffected.
-
-Deleting a map record is done with:
-
-bc(sample). 
-DELETE favs['author'] FROM users WHERE id = 'jsmith'
-
-h4(#set). Sets
-
-A @set@ is a "typed":#types collection of unique values. Sets are ordered by 
their values. To create a column of type @set@, use the @set@ keyword suffixed 
with the value type enclosed in angle brackets.  For example:
-
-bc(sample). 
-CREATE TABLE images (
-    name text PRIMARY KEY,
-    owner text,
-    date timestamp,
-    tags set<text>
-);
-
-Writing a @set@ is accomplished by comma separating the set values, and 
enclosing them in curly braces.  _Note: An @INSERT@ will always replace the 
entire set._
-
-bc(sample). 
-INSERT INTO images (name, owner, date, tags)
-            VALUES ('cat.jpg', 'jsmith', 'now', { 'kitten', 'cat', 'pet' });
-
-Adding and removing values of a set can be accomplished with an @UPDATE@ by 
adding/removing new set values to an existing @set@ column.
-
-bc(sample). 
-UPDATE images SET tags = tags + { 'cute', 'cuddly' } WHERE name = 'cat.jpg';
-UPDATE images SET tags = tags - { 'lame' } WHERE name = 'cat.jpg';
-
-As with "maps":#map, TTLs if used only apply to the newly inserted/updated 
_values_.
-
-h4(#list). Lists
-
-A @list@ is a "typed":#types collection of non-unique values where elements are ordered by their position in the list.  To create a column of type @list@, use the @list@ keyword suffixed with the value type enclosed in angle brackets.  For example:
-
-bc(sample). 
-CREATE TABLE plays (
-    id text PRIMARY KEY,
-    game text,
-    players int,
-    scores list<int>
-)
-
-Do note that as explained below, lists have some limitations and performance 
considerations to take into account, and it is advised to prefer "sets":#set 
over lists when this is possible.
-
-Writing @list@ data is accomplished with a JSON-style syntax.  To write a 
record using @INSERT@, specify the entire list as a JSON array.  _Note: An 
@INSERT@ will always replace the entire list._
-
-bc(sample). 
-INSERT INTO plays (id, game, players, scores)
-           VALUES ('123-afde', 'quake', 3, [17, 4, 2]);
-
-Adding (appending or prepending) values to a list can be accomplished by 
adding a new JSON-style array to an existing @list@ column.
-
-bc(sample). 
-UPDATE plays SET players = 5, scores = scores + [ 14, 21 ] WHERE id = 
'123-afde';
-UPDATE plays SET players = 5, scores = [ 12 ] + scores WHERE id = '123-afde';
-
-It should be noted that append and prepend are not idempotent operations. This means that if an append or a prepend times out, it is not always safe to retry the operation (as this could result in the record being appended or prepended twice).
-
-Lists also provide the following operations: setting an element by its position in the list, removing an element by its position in the list, and removing all occurrences of a given value from the list. _However, and contrary to all the other collection operations, these three operations induce an internal read before the update, and will thus typically have slower performance characteristics_. Those operations have the following syntax:
-
-bc(sample). 
-UPDATE plays SET scores[1] = 7 WHERE id = '123-afde';                // sets the 2nd element of scores to 7 (raises an error if scores has fewer than 2 elements)
-DELETE scores[1] FROM plays WHERE id = '123-afde';                   // deletes the 2nd element of scores (raises an error if scores has fewer than 2 elements)
-UPDATE plays SET scores = scores - [ 12, 21 ] WHERE id = '123-afde'; // removes all occurrences of 12 and 21 from scores
-
-As with "maps":#map, TTLs if used only apply to the newly inserted/updated 
_values_.
-
-
-h2(#functions). Functions
-
-CQL3 distinguishes between built-in functions (so called 'native functions') 
and "user-defined functions":#udfs.  CQL3 includes several native functions, 
described below:
-
-h3(#castFun). Cast
-
-The @cast@ function can be used to convert one native datatype to another.
-
-The following table describes the conversions supported by the @cast@ 
function. Cassandra will silently ignore any cast converting a datatype into 
its own datatype.
-
-|_. from    |_. to   |
-|@ascii@   |@text@, @varchar@                                                  
                                  |
-|@bigint@   |@tinyint@, @smallint@, @int@, @float@, @double@, @decimal@, 
@varint@, @text@, @varchar@             |
-|@boolean@  |@text@, @varchar@                                                 
                                  |
-|@counter@  |@tinyint@, @smallint@, @int@, @bigint@, @float@, @double@, 
@decimal@, @varint@, @text@, @varchar@   |
-|@date@      |@timestamp@                                                      
                                  |
-|@decimal@  |@tinyint@, @smallint@, @int@, @bigint@, @float@, @double@, 
@varint@, @text@, @varchar@              |
-|@double@   |@tinyint@, @smallint@, @int@, @bigint@, @float@, @decimal@, 
@varint@, @text@, @varchar@             |
-|@float@     |@tinyint@, @smallint@, @int@, @bigint@, @double@, @decimal@, 
@varint@, @text@, @varchar@           |
-|@inet@      |@text@, @varchar@                                                
                                  |
-|@int@       |@tinyint@, @smallint@, @bigint@, @float@, @double@, @decimal@, 
@varint@, @text@, @varchar@         |
-|@smallint@ |@tinyint@, @int@, @bigint@, @float@, @double@, @decimal@, 
@varint@, @text@, @varchar@               |
-|@time@      |@text@, @varchar@                                                
                                  |
-|@timestamp@|@date@, @text@, @varchar@                                         
                                  |
-|@timeuuid@ |@timestamp@, @date@, @text@, @varchar@                            
                                  |
-|@tinyint@  |@tinyint@, @smallint@, @int@, @bigint@, @float@, @double@, 
@decimal@, @varint@, @text@, @varchar@   |
-|@uuid@      |@text@, @varchar@                                                
                                  |
-|@varint@   |@tinyint@, @smallint@, @int@, @bigint@, @float@, @double@, 
@decimal@, @text@, @varchar@             |
-
-
-The conversions rely strictly on Java's semantics. For example, the double 
value 1 will be converted to the text value '1.0'.
-
-bc(sample). 
-SELECT avg(cast(count as double)) FROM myTable
-
-h3(#tokenFun). Token
-
-The @token@ function computes the token for a given partition key. The exact signature of the token function depends on the table concerned and on the partitioner used by the cluster.
-
-The types of the arguments of @token@ depend on the types of the partition key columns. The return type depends on the partitioner in use:
-* For Murmur3Partitioner, the return type is @bigint@.
-* For RandomPartitioner, the return type is @varint@.
-* For ByteOrderedPartitioner, the return type is @blob@.
-
-For instance, in a cluster using the default Murmur3Partitioner, if a table is 
defined by
-
-bc(sample). 
-CREATE TABLE users (
-    userid text PRIMARY KEY,
-    username text,
-    ...
-)
-
-then the @token@ function will take a single argument of type @text@ (in that case, the partition key is @userid@; there are no clustering columns so the partition key is the same as the primary key), and the return type will be @bigint@.
-
-h3(#uuidFun). Uuid
-
-The @uuid@ function takes no parameters and generates a random type 4 uuid 
suitable for use in INSERT or SET statements.
-
-h3(#timeuuidFun). Timeuuid functions
-
-h4. @now@
-
-The @now@ function takes no arguments and generates, on the coordinator node, 
a new unique timeuuid (at the time where the statement using it is executed). 
Note that this method is useful for insertion but is largely non-sensical in 
@WHERE@ clauses. For instance, a query of the form
-
-bc(sample). 
-SELECT * FROM myTable WHERE t = now()
-
-
-will never return any result by design, since the value returned by @now()@ is 
guaranteed to be unique.
-
-h4. @minTimeuuid@ and @maxTimeuuid@
-
-The @minTimeuuid@ (resp. @maxTimeuuid@) function takes a @timestamp@ value @t@ (which can be "either a timestamp or a date string":#usingtimestamps ) and returns a _fake_ @timeuuid@ corresponding to the _smallest_ (resp. _biggest_) possible @timeuuid@ having timestamp @t@. So for instance:
-
-
-bc(sample). 
-SELECT * FROM myTable WHERE t > maxTimeuuid('2013-01-01 00:05+0000') AND t < 
minTimeuuid('2013-02-02 10:00+0000')
-
-
-will select all rows where the @timeuuid@ column @t@ is strictly older than 
'2013-01-01 00:05+0000' but strictly younger than '2013-02-02 10:00+0000'.  
Please note that @t >= maxTimeuuid('2013-01-01 00:05+0000')@ would still _not_ 
select a @timeuuid@ generated exactly at '2013-01-01 00:05+0000' and is 
essentially equivalent to @t > maxTimeuuid('2013-01-01 00:05+0000')@.
-
-_Warning_: We call the values generated by @minTimeuuid@ and @maxTimeuuid@ _fake_ UUIDs because they do not respect the time-based UUID generation process specified by "RFC 4122":http://www.ietf.org/rfc/rfc4122.txt. In particular, the values returned by these 2 methods will not be unique. This means you should only use those methods for querying (as in the example above). Inserting the result of those methods is almost certainly _a bad idea_.
-
-h3(#timeFun). Time conversion functions
-
-A number of functions are provided to "convert" a @timeuuid@, a @timestamp@ or 
a @date@ into another @native@ type.
-
-|_. function name    |_. input type   |_. description|
-|@toDate@            |@timeuuid@      |Converts the @timeuuid@ argument into a 
@date@ type|
-|@toDate@            |@timestamp@     |Converts the @timestamp@ argument into 
a @date@ type|
-|@toTimestamp@       |@timeuuid@      |Converts the @timeuuid@ argument into a 
@timestamp@ type|
-|@toTimestamp@       |@date@          |Converts the @date@ argument into a 
@timestamp@ type|
-|@toUnixTimestamp@   |@timeuuid@      |Converts the @timeuuid@ argument into a 
@bigInt@ raw value|
-|@toUnixTimestamp@   |@timestamp@     |Converts the @timestamp@ argument into 
a @bigInt@ raw value|
-|@toUnixTimestamp@   |@date@          |Converts the @date@ argument into a 
@bigInt@ raw value|
-|@dateOf@            |@timeuuid@      |Similar to @toTimestamp(timeuuid)@ 
(DEPRECATED)|
-|@unixTimestampOf@   |@timeuuid@      |Similar to @toUnixTimestamp(timeuuid)@ 
(DEPRECATED)|
-
-h3(#blobFun). Blob conversion functions
-
-A number of functions are provided to "convert" the native types into binary data (@blob@). For every @<native-type>@ @type@ supported by CQL3 (a notable exception is @blob@ itself, for obvious reasons), the function @typeAsBlob@ takes an argument of type @type@ and returns it as a @blob@.  Conversely, the function @blobAsType@ takes a @blob@ argument and converts it back to a value of type @type@.  So for instance, @bigintAsBlob(3)@ is @0x0000000000000003@ and @blobAsBigint(0x0000000000000003)@ is @3@.
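The round trip in the example can be checked outside Cassandra; this Python sketch reproduces the big-endian 8-byte encoding that the @bigint@ conversions correspond to (an illustration, not Cassandra code):

```python
import struct

# bigintAsBlob(3): a bigint is serialized as 8 big-endian bytes.
blob = struct.pack(">q", 3)
print("0x" + blob.hex())  # 0x0000000000000003

# blobAsBigint(0x0000000000000003): decode the 8 bytes back.
value = struct.unpack(">q", blob)[0]
print(value)  # 3
```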
-
-h2(#aggregates). Aggregates
-
-Aggregate functions work on a set of rows. They receive values for each row and return one value for the whole set.
-If @normal@ columns, @scalar functions@, @UDT@ fields, @writetime@ or @ttl@ are selected together with aggregate functions, the values returned for them will be the ones of the first row matching the query.
-
-CQL3 distinguishes between built-in aggregates (so called 'native aggregates') 
and "user-defined aggregates":#udas. CQL3 includes several native aggregates, 
described below:
-
-h3(#countFct). Count
-
-The @count@ function can be used to count the rows returned by a query. 
Example:
-
-bc(sample). 
-SELECT COUNT(*) FROM plays;
-SELECT COUNT(1) FROM plays;
-
-It can also be used to count the non-null values of a given column. Example:
-
-bc(sample). 
-SELECT COUNT(scores) FROM plays;
-
-h3(#maxMinFcts). Max and Min
-
-The @max@ and @min@ functions can be used to compute the maximum and the 
minimum value returned by a query for a given column.
-
-bc(sample). 
-SELECT MIN(players), MAX(players) FROM plays WHERE game = 'quake';
-
-h3(#sumFct). Sum
-
-The @sum@ function can be used to sum up all the values returned by a query 
for a given column.
-
-bc(sample). 
-SELECT SUM(players) FROM plays;
-
-h3(#avgFct). Avg
-
-The @avg@ function can be used to compute the average of all the values 
returned by a query for a given column.
-
-bc(sample). 
-SELECT AVG(players) FROM plays;
-
-h2(#udfs). User-Defined Functions
-
-User-defined functions allow execution of user-provided code in Cassandra. By 
default, Cassandra supports defining functions in _Java_ and _JavaScript_. 
Support for other JSR 223 compliant scripting languages (such as Python, Ruby, 
and Scala) has been removed in 3.0.11.
-
-UDFs are part of the Cassandra schema.  As such, they are automatically 
propagated to all nodes in the cluster.
-
-UDFs can be _overloaded_ - i.e. multiple UDFs with different argument types 
but the same function name. Example:
-
-bc(sample). 
-CREATE FUNCTION sample ( arg int ) ...;
-CREATE FUNCTION sample ( arg text ) ...;
-
-User-defined functions are susceptible to all of the normal problems with the 
chosen programming language.  Accordingly, implementations should be safe 
against null pointer exceptions, illegal arguments, or any other potential 
source of exceptions.  An exception during function execution will result in 
the entire statement failing.
-
-It is valid to use _complex_ types like collections, tuple types and 
user-defined types as argument and return types. Tuple types and user-defined 
types are handled by the conversion functions of the DataStax Java Driver. 
Please see the documentation of the Java Driver for details on handling tuple 
types and user-defined types.
-
-Arguments for functions can be literals or terms. Prepared statement 
placeholders can be used, too.
-
-Note that you can use the double-quoted string syntax to enclose the UDF 
source code. For example:
-
-bc(sample).. 
-CREATE FUNCTION some_function ( arg int )
-  RETURNS NULL ON NULL INPUT
-  RETURNS int
-  LANGUAGE java
-  AS $$ return arg; $$;
-
-SELECT some_function(column) FROM atable ...;
-UPDATE atable SET col = some_function(?) ...;
-p. 
-
-bc(sample). 
-CREATE TYPE custom_type (txt text, i int);
-CREATE FUNCTION fct_using_udt ( udtarg frozen<custom_type> )
-  RETURNS NULL ON NULL INPUT
-  RETURNS text
-  LANGUAGE java
-  AS $$ return udtarg.getString("txt"); $$;
-
-User-defined functions can be used in "@SELECT@":#selectStmt, 
"@INSERT@":#insertStmt and "@UPDATE@":#updateStmt statements.
-
-The implicitly available @udfContext@ field (or binding for script UDFs) provides the necessary functionality to create new UDT and tuple values.
-
-bc(sample). 
-CREATE TYPE custom_type (txt text, i int);
-CREATE FUNCTION fct_using_udt ( somearg int )
-  RETURNS NULL ON NULL INPUT
-  RETURNS custom_type
-  LANGUAGE java
-  AS $$
-    UDTValue udt = udfContext.newReturnUDTValue();
-    udt.setString("txt", "some string");
-    udt.setInt("i", 42);
-    return udt;
-  $$;
-
-The definition of the @UDFContext@ interface can be found in the Apache 
Cassandra source code for @org.apache.cassandra.cql3.functions.UDFContext@.
-
-bc(sample). 
-public interface UDFContext
-{
-    UDTValue newArgUDTValue(String argName);
-    UDTValue newArgUDTValue(int argNum);
-    UDTValue newReturnUDTValue();
-    UDTValue newUDTValue(String udtName);
-    TupleValue newArgTupleValue(String argName);
-    TupleValue newArgTupleValue(int argNum);
-    TupleValue newReturnTupleValue();
-    TupleValue newTupleValue(String cqlDefinition);
-}
-
-Java UDFs already have some imports for common interfaces and classes defined. These imports are listed below. Please note that these convenience imports are not available for script UDFs.
-
-bc(sample). 
-import java.nio.ByteBuffer;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-import org.apache.cassandra.cql3.functions.UDFContext;
-import com.datastax.driver.core.TypeCodec;
-import com.datastax.driver.core.TupleValue;
-import com.datastax.driver.core.UDTValue;
-
-See "@CREATE FUNCTION@":#createFunctionStmt and "@DROP 
FUNCTION@":#dropFunctionStmt.
-
-h2(#udas). User-Defined Aggregates
-
-User-defined aggregates allow creation of custom aggregate functions using 
"UDFs":#udfs. Common examples of aggregate functions are _count_, _min_, and 
_max_.
-
-Each aggregate requires an _initial state_ (@INITCOND@, which defaults to @null@) of type @STYPE@. The first argument of the state function must have type @STYPE@. The remaining arguments of the state function must match the types of the user-defined aggregate arguments. The state function is called once for each row, and the value returned by the state function becomes the new state. After all rows are processed, the optional @FINALFUNC@ is executed with the last state value as its argument.
-
-@STYPE@ is mandatory in order to be able to distinguish possibly overloaded 
versions of the state and/or final function (since the overload can appear 
after creation of the aggregate).
-
-User-defined aggregates can be used in "@SELECT@":#selectStmt statements.
-
-A complete working example for user-defined aggregates (assuming that a 
keyspace has been selected using the "@USE@":#useStmt statement):
-
-bc(sample).. 
-CREATE OR REPLACE FUNCTION averageState ( state tuple<int,bigint>, val int )
-  CALLED ON NULL INPUT
-  RETURNS tuple<int,bigint>
-  LANGUAGE java
-  AS '
-    if (val != null) {
-      state.setInt(0, state.getInt(0)+1);
-      state.setLong(1, state.getLong(1)+val.intValue());
-    }
-    return state;
-  ';
-
-CREATE OR REPLACE FUNCTION averageFinal ( state tuple<int,bigint> )
-  CALLED ON NULL INPUT
-  RETURNS double
-  LANGUAGE java
-  AS '
-    double r = 0;
-    if (state.getInt(0) == 0) return null;
-    r = state.getLong(1);
-    r /= state.getInt(0);
-    return Double.valueOf(r);
-  ';
-
-CREATE OR REPLACE AGGREGATE average ( int )
-  SFUNC averageState
-  STYPE tuple<int,bigint>
-  FINALFUNC averageFinal
-  INITCOND (0, 0);
-
-CREATE TABLE atable (
-  pk int PRIMARY KEY,
-  val int);
-INSERT INTO atable (pk, val) VALUES (1,1);
-INSERT INTO atable (pk, val) VALUES (2,2);
-INSERT INTO atable (pk, val) VALUES (3,3);
-INSERT INTO atable (pk, val) VALUES (4,4);
-SELECT average(val) FROM atable;
-p. 
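The fold performed by the aggregate above can be mirrored in plain Python (a sketch of the semantics, not Cassandra code): the state function runs once per row, and the final function converts the last state into the result:

```python
def average_state(state, val):
    # Mirrors averageState: count non-null values and sum them.
    count, total = state
    if val is not None:
        count, total = count + 1, total + val
    return (count, total)

def average_final(state):
    # Mirrors averageFinal: last state -> final average (or null).
    count, total = state
    return None if count == 0 else total / count

state = (0, 0)                    # INITCOND
for val in [1, 2, 3, 4]:          # the values inserted into atable
    state = average_state(state, val)
print(average_final(state))       # 2.5
```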
-
-See "@CREATE AGGREGATE@":#createAggregateStmt and "@DROP 
AGGREGATE@":#dropAggregateStmt.
-
-h2(#json). JSON Support
-
-Cassandra 2.2 introduces JSON support to "@SELECT@":#selectStmt and "@INSERT@":#insertStmt statements.  This support does not fundamentally alter the CQL API (for example, the schema is still enforced); it simply provides a convenient way to work with JSON documents.
-
-h3(#selectJson). SELECT JSON
-
-With @SELECT@ statements, the new @JSON@ keyword can be used to return each row as a single @JSON@ encoded map.  The remainder of the @SELECT@ statement behavior is the same.
-
-The result map keys are the same as the column names in a normal result set.  For example, a statement like "@SELECT JSON a, ttl(b) FROM ...@" would result in a map with keys @"a"@ and @"ttl(b)"@.  However, there is one notable exception: for symmetry with @INSERT JSON@ behavior, case-sensitive column names with upper-case letters will be surrounded with double quotes.  For example, "@SELECT JSON myColumn FROM ...@" would result in a map key @"\"myColumn\""@ (note the escaped quotes).
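For instance, the quoting rule can be mimicked with Python's @json@ module (an illustration of the resulting JSON shape, not driver code):

```python
import json

# A case-sensitive column name is wrapped in double quotes inside the key,
# so the serialized JSON contains escaped quotes around it.
row = {"a": 1, '"myColumn"': 2}
print(json.dumps(row))  # {"a": 1, "\"myColumn\"": 2}
```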
-
-The map values will be @JSON@-encoded representations (as described below) of the result set values.
-
-h3(#insertJson). INSERT JSON
-
-With @INSERT@ statements, the new @JSON@ keyword can be used to enable 
inserting a @JSON@ encoded map as a single row.  The format of the @JSON@ map 
should generally match that returned by a @SELECT JSON@ statement on the same 
table.  In particular, case-sensitive column names should be surrounded with 
double quotes.  For example, to insert into a table with two columns named 
"myKey" and "value", you would do the following:
-
-bc(sample). 
-INSERT INTO mytable JSON '{"\"myKey\"": 0, "value": 0}'
-
-Any columns which are omitted from the @JSON@ map will default to a @NULL@ value (which will result in a tombstone being created).
-
-h3(#jsonEncoding). JSON Encoding of Cassandra Data Types
-
-Where possible, Cassandra will represent and accept data types in their native 
@JSON@ representation.  Cassandra will also accept string representations 
matching the CQL literal format for all single-field types.  For example, 
floats, ints, UUIDs, and dates can be represented by CQL literal strings.  
However, compound types, such as collections, tuples, and user-defined types 
must be represented by native @JSON@ collections (maps and lists) or a 
JSON-encoded string representation of the  [...]
-
-The following table describes the encodings that Cassandra will accept in 
@INSERT JSON@ values (and @fromJson()@ arguments) as well as the format 
Cassandra will use when returning data for @SELECT JSON@ statements (and 
@fromJson()@):
-
-|_. type    |_. formats accepted   |_. return format |_. notes|
-|@ascii@    |string                |string           |Uses JSON's @\u@ character escape|
-|@bigint@   |integer, string       |integer          |String must be valid 64 bit integer|
-|@blob@     |string                |string           |String should be 0x followed by an even number of hex digits|
-|@boolean@  |boolean, string       |boolean          |String must be "true" or "false"|
-|@date@     |string                |string           |Date in format @YYYY-MM-DD@, timezone UTC|
-|@decimal@  |integer, float, string|float            |May exceed 32 or 64-bit IEEE-754 floating point precision in client-side decoder|
-|@double@   |integer, float, string|float            |String must be valid integer or float|
-|@float@    |integer, float, string|float            |String must be valid integer or float|
-|@inet@     |string                |string           |IPv4 or IPv6 address|
-|@int@      |integer, string       |integer          |String must be valid 32 bit integer|
-|@list@     |list, string          |list             |Uses JSON's native list representation|
-|@map@      |map, string           |map              |Uses JSON's native map representation|
-|@smallint@ |integer, string       |integer          |String must be valid 16 bit integer|
-|@set@      |list, string          |list             |Uses JSON's native list representation|
-|@text@     |string                |string           |Uses JSON's @\u@ character escape|
-|@time@     |string                |string           |Time of day in format @HH-MM-SS[.fffffffff]@|
-|@timestamp@|integer, string       |string           |A timestamp. Strings constant are allow to input timestamps as dates, see "Working with dates":#usingdates below for more information.  Datestamps with format @YYYY-MM-DD HH:MM:SS.SSS@ are returned.|
-|@timeuuid@ |string                |string           |Type 1 UUID. See "Constants":#constants for the UUID format|
-|@tinyint@  |integer, string       |integer          |String must be valid 8 bit integer|
-|@tuple@    |list, string          |list             |Uses JSON's native list representation|
-|@UDT@      |map, string           |map              |Uses JSON's native map representation with field names as keys|
-|@uuid@     |string                |string           |See "Constants":#constants for the UUID format|
-|@varchar@  |string                |string           |Uses JSON's @\u@ character escape|
-|@varint@   |integer, string       |integer          |Variable length; may overflow 32 or 64 bit integers in client-side decoder|
-
-h3(#fromJson). The fromJson() Function
-
-The @fromJson()@ function may be used similarly to @INSERT JSON@, but for a single column value.  It may only be used in the @VALUES@ clause of an @INSERT@ statement or as one of the column values in an @UPDATE@, @DELETE@, or @SELECT@ statement.  For example, it cannot be used in the selection clause of a @SELECT@ statement.
-
-h3(#toJson). The toJson() Function
-
-The @toJson()@ function may be used similarly to @SELECT JSON@, but for a single column value.  It may only be used in the selection clause of a @SELECT@ statement.
-
-h2(#appendixA). Appendix A: CQL Keywords
-
-CQL distinguishes between _reserved_ and _non-reserved_ keywords. Reserved keywords cannot be used as identifier, they are truly reserved for the language (but one can enclose a reserved keyword by double-quotes to use it as an identifier). Non-reserved keywords however only have a specific meaning in certain context but can used as identifer otherwise. The only _raison d'ĂȘtre_ of these non-reserved keywords is convenience: some keyword are non-reserved when it was always easy for the pa [...]
-
-|_. Keyword      |_. Reserved? |
-| @ADD@          | yes |
-| @AGGREGATE@    | no  |
-| @ALL@          | no  |
-| @ALLOW@        | yes |
-| @ALTER@        | yes |
-| @AND@          | yes |
-| @APPLY@        | yes |
-| @AS@           | no  |
-| @ASC@          | yes |
-| @ASCII@        | no  |
-| @AUTHORIZE@    | yes |
-| @BATCH@        | yes |
-| @BEGIN@        | yes |
-| @BIGINT@       | no  |
-| @BLOB@         | no  |
-| @BOOLEAN@      | no  |
-| @BY@           | yes |
-| @CALLED@       | no  |
-| @CAST@         | no  |
-| @CLUSTERING@   | no  |
-| @COLUMNFAMILY@ | yes |
-| @COMPACT@      | no  |
-| @CONTAINS@     | no  |
-| @COUNT@        | no  |
-| @COUNTER@      | no  |
-| @CREATE@       | yes |
-| @CUSTOM@       | no  |
-| @DATE@         | no  |
-| @DECIMAL@      | no  |
-| @DEFAULT@      | yes |
-| @DELETE@       | yes |
-| @DESC@         | yes |
-| @DESCRIBE@     | yes |
-| @DISTINCT@     | no  |
-| @DOUBLE@       | no  |
-| @DROP@         | yes |
-| @DURATION@     | no  |
-| @ENTRIES@      | yes |
-| @EXECUTE@      | yes |
-| @EXISTS@       | no  |
-| @FILTERING@    | no  |
-| @FINALFUNC@    | no  |
-| @FLOAT@        | no  |
-| @FROM@         | yes |
-| @FROZEN@       | no  |
-| @FULL@         | yes |
-| @FUNCTION@     | no  |
-| @FUNCTIONS@    | no  |
-| @GRANT@        | yes |
-| @GROUP@        | no  |
-| @IF@           | yes |
-| @IN@           | yes |
-| @INDEX@        | yes |
-| @INET@         | no  |
-| @INFINITY@     | yes |
-| @INITCOND@     | no  |
-| @INPUT@        | no  |
-| @INSERT@       | yes |
-| @INT@          | no  |
-| @INTO@         | yes |
-| @IS@           | yes |
-| @JSON@         | no  |
-| @KEY@          | no  |
-| @KEYS@         | no  |
-| @KEYSPACE@     | yes |
-| @KEYSPACES@    | no  |
-| @LANGUAGE@     | no  |
-| @LIKE@         | no  |
-| @LIMIT@        | yes |
-| @LIST@         | no  |
-| @LOGIN@        | no  |
-| @MAP@          | no  |
-| @MATERIALIZED@ | yes |
-| @MBEAN@        | yes |
-| @MBEANS@       | yes |
-| @MODIFY@       | yes |
-| @NAN@          | yes |
-| @NOLOGIN@      | no  |
-| @NORECURSIVE@  | yes |
-| @NOSUPERUSER@  | no  |
-| @NOT@          | yes |
-| @NULL@         | yes |
-| @OF@           | yes |
-| @ON@           | yes |
-| @OPTIONS@      | no  |
-| @OR@           | yes |
-| @ORDER@        | yes |
-| @PARTITION@    | no  |
-| @PASSWORD@     | no  |
-| @PER@          | no  |
-| @PERMISSION@   | no  |
-| @PERMISSIONS@  | no  |
-| @PRIMARY@      | yes |
-| @RENAME@       | yes |
-| @REPLACE@      | yes |
-| @RETURNS@      | no  |
-| @REVOKE@       | yes |
-| @ROLE@         | no  |
-| @ROLES@        | no  |
-| @SCHEMA@       | yes |
-| @SELECT@       | yes |
-| @SET@          | yes |
-| @SFUNC@        | no  |
-| @SMALLINT@     | no  |
-| @STATIC@       | no  |
-| @STORAGE@      | no  |
-| @STYPE@        | no  |
-| @SUPERUSER@    | no  |
-| @TABLE@        | yes |
-| @TEXT@         | no  |
-| @TIME@         | no  |
-| @TIMESTAMP@    | no  |
-| @TIMEUUID@     | no  |
-| @TINYINT@      | no  |
-| @TO@           | yes |
-| @TOKEN@        | yes |
-| @TRIGGER@      | no  |
-| @TRUNCATE@     | yes |
-| @TTL@          | no  |
-| @TUPLE@        | no  |
-| @TYPE@         | no  |
-| @UNLOGGED@     | yes |
-| @UNSET@        | yes |
-| @UPDATE@       | yes |
-| @USE@          | yes |
-| @USER@         | no  |
-| @USERS@        | no  |
-| @USING@        | yes |
-| @UUID@         | no  |
-| @VALUES@       | no  |
-| @VARCHAR@      | no  |
-| @VARINT@       | no  |
-| @VIEW@         | yes |
-| @WHERE@        | yes |
-| @WITH@         | yes |
-| @WRITETIME@    | no  |
-
-h2(#appendixB). Appendix B: CQL Reserved Types
-
-The following type names are not currently used by CQL, but are reserved for potential future use.  User-defined types may not use reserved type names as their name.
-
-|_. type      |
-| @bitstring@ |
-| @byte@      |
-| @complex@   |
-| @date@      |
-| @enum@      |
-| @interval@  |
-| @macaddr@   |
-
-h2(#changes). Changes
-
-The following describes the changes in each version of CQL.
-
-h3. 3.4.3
-
-* Support for @GROUP BY@. See "@<group-by>@":#selectGroupBy (see "CASSANDRA-10707":https://issues.apache.org/jira/browse/CASSANDRA-10707).
-
-h3. 3.4.2
-
-* Support for selecting elements and slices of a collection ("CASSANDRA-7396":https://issues.apache.org/jira/browse/CASSANDRA-7396).
-* "@INSERT/UPDATE options@":#updateOptions for tables having a default_time_to_live specifying a TTL of 0 will remove the TTL from the inserted or updated values
-* "@ALTER TABLE@":#alterTableStmt @ADD@ and @DROP@ now allow mutiple columns to be added/removed
-* New "@PER PARTITION LIMIT@":#selectLimit option (see "CASSANDRA-7017":https://issues.apache.org/jira/browse/CASSANDRA-7017).
-* "User-defined functions":#udfs can now instantiate @UDTValue@ and @TupleValue@ instances via the new @UDFContext@ interface (see "CASSANDRA-10818":https://issues.apache.org/jira/browse/CASSANDRA-10818).
-* "User-defined types"#createTypeStmt may now be stored in a non-frozen form, allowing individual fields to be updated and deleted in "@UPDATE@ statements":#updateStmt and "@DELETE@ statements":#deleteStmt, respectively. ("CASSANDRA-7423":https://issues.apache.org/jira/browse/CASSANDRA-7423)
-
-h3. 3.4.1
-
-* Adds @CAST@ functions. See "@Cast@":#castFun.
-
-h3. 3.4.0
-
-* Support for "materialized views":#createMVStmt
-* "@DELETE@":#deleteStmt support for inequality expressions and @IN@ 
restrictions on any primary key columns
-* "@UPDATE@":#updateStmt support for @IN@ restrictions on any primary key 
columns
-
-h3. 3.3.1
-
-* The syntax @TRUNCATE TABLE X@ is now accepted as an alias for @TRUNCATE X@
-
-h3. 3.3.0
-
-* Adds new "aggregates":#aggregates
-* User-defined functions are now supported through "@CREATE FUNCTION@":#createFunctionStmt and "@DROP FUNCTION@":#dropFunctionStmt.
-* User-defined aggregates are now supported through "@CREATE AGGREGATE@":#createAggregateStmt and "@DROP AGGREGATE@":#dropAggregateStmt.
-* Allows double-dollar enclosed strings literals as an alternative to single-quote enclosed strings.
-* Introduces Roles to supercede user based authentication and access control
-* "@Date@":#usingdates and "@Time@":usingtime data types have been added
-* "@JSON@":#json support has been added
-* @Tinyint@ and @Smallint@ data types have been added
-* Adds new time conversion functions and deprecate @dateOf@ and @unixTimestampOf@. See "@Time conversion functions@":#timeFun
-
-h3. 3.2.0
-
-* User-defined types are now supported through "@CREATE TYPE@":#createTypeStmt, "@ALTER TYPE@":#alterTypeStmt, and "@DROP TYPE@":#dropTypeStmt
-* "@CREATE INDEX@":#createIndexStmt now supports indexing collection columns, including indexing the keys of map collections through the @keys()@ function
-* Indexes on collections may be queried using the new @CONTAINS@ and @CONTAINS KEY@ operators
-* Tuple types were added to hold fixed-length sets of typed positional fields (see the section on "types":#types )
-* "@DROP INDEX@":#dropIndexStmt now supports optionally specifying a keyspace
-
-h3. 3.1.7
-
-* @SELECT@ statements now support selecting multiple rows in a single partition using an @IN@ clause on combinations of clustering columns.  See "SELECT WHERE":#selectWhere clauses.
-* @IF NOT EXISTS@ and @IF EXISTS@ syntax is now supported by @CREATE USER@ and @DROP USER@ statmenets, respectively.
-
-h3. 3.1.6
-
-* A new "@uuid@ method":#uuidFun has been added.
-* Support for @DELETE ... IF EXISTS@ syntax.
-
-h3. 3.1.5
-
-* It is now possible to group clustering columns in a relatiion, see "SELECT WHERE":#selectWhere clauses.
-* Added support for @STATIC@ columns, see "static in CREATE TABLE":#createTableStatic.
-
-h3. 3.1.4
-
-* @CREATE INDEX@ now allows specifying options when creating CUSTOM indexes (see "CREATE INDEX reference":#createIndexStmt ).
-
-h3. 3.1.3
-
-* Millisecond precision formats have been added to the timestamp parser (see "working with dates":#usingtimestamps ).
-
-h3. 3.1.2
-
-* @NaN@ and @Infinity@ has been added as valid float contants. They are now reserved keywords. In the unlikely case you we using them as a column identifier (or keyspace/table one), you will noew need to double quote them (see "quote identifiers":#identifiers ).
-
-h3. 3.1.1
-
-* @SELECT@ statement now allows listing the partition keys (using the @DISTINCT@ modifier). See "CASSANDRA-4536":https://issues.apache.org/jira/browse/CASSANDRA-4536.
-* The syntax @c IN ?@ is now supported in @WHERE@ clauses. In that case, the value expected for the bind variable will be a list of whatever type @c@ is.
-* It is now possible to use named bind variables (using @:name@ instead of @?@).
-
-h3. 3.1.0
-
-* "ALTER TABLE":#alterTableStmt @DROP@ option has been reenabled for CQL3 
tables and has new semantics now: the space formerly used by dropped columns 
will now be eventually reclaimed (post-compaction). You should not readd 
previously dropped columns unless you use timestamps with microsecond precision 
(see "CASSANDRA-3919":https://issues.apache.org/jira/browse/CASSANDRA-3919 for 
more details).
-* @SELECT@ statement now supports aliases in select clause. Aliases in WHERE 
and ORDER BY clauses are not supported. See the "section on select":#selectStmt 
for details.
-* @CREATE@ statements for @KEYSPACE@, @TABLE@ and @INDEX@ now supports an @IF 
NOT EXISTS@ condition. Similarly, @DROP@ statements support a @IF EXISTS@ 
condition.
-* @INSERT@ statements optionally supports a @IF NOT EXISTS@ condition and 
@UPDATE@ supports @IF@ conditions.
-
-h3. 3.0.5
-
-* @SELECT@, @UPDATE@, and @DELETE@ statements now allow empty @IN@ relations (see "CASSANDRA-5626":https://issues.apache.org/jira/browse/CASSANDRA-5626).
-
-h3. 3.0.4
-
-* Updated the syntax for custom "secondary indexes":#createIndexStmt.
-* Non-equal condition on the partition key are now never supported, even for ordering partitioner as this was not correct (the order was *not* the one of the type of the partition key). Instead, the @token@ method should always be used for range queries on the partition key (see "WHERE clauses":#selectWhere ).
-
-h3. 3.0.3
-
-* Support for custom "secondary indexes":#createIndexStmt has been added.
-
-h3. 3.0.2
-
-* Type validation for the "constants":#constants has been fixed. For instance, the implementation used to allow @'2'@ as a valid value for an @int@ column (interpreting it has the equivalent of @2@), or @42@ as a valid @blob@ value (in which case @42@ was interpreted as an hexadecimal representation of the blob). This is no longer the case, type validation of constants is now more strict. See the "data types":#types section for details on which constant is allowed for which type.
-* The type validation fixed of the previous point has lead to the introduction of "blobs constants":#constants to allow inputing blobs. Do note that while inputing blobs as strings constant is still supported by this version (to allow smoother transition to blob constant), it is now deprecated (in particular the "data types":#types section does not list strings constants as valid blobs) and will be removed by a future version. If you were using strings as blobs, you should thus update yo [...]
-* A number of functions to convert native types to blobs have also been introduced. Furthermore the token function is now also allowed in select clauses. See the "section on functions":#functions for details.
-
-h3. 3.0.1
-
-* "Date strings":#usingtimestamps (and timestamps) are no longer accepted as 
valid @timeuuid@ values. Doing so was a bug in the sense that date string are 
not valid @timeuuid@, and it was thus resulting in "confusing 
behaviors":https://issues.apache.org/jira/browse/CASSANDRA-4936.  However, the 
following new methods have been added to help working with @timeuuid@: @now@, 
@minTimeuuid@, @maxTimeuuid@ , @dateOf@ and @unixTimestampOf@. See the "section 
dedicated to these methods":#timeuuidF [...]
-* "Float constants"#constants now support the exponent notation. In other 
words, @4.2E10@ is now a valid floating point value.
-
-h2. Versioning
-
-Versioning of the CQL language adheres to the "Semantic Versioning":http://semver.org guidelines. Versions take the form X.Y.Z where X, Y, and Z are integer values representing major, minor, and patch level respectively. There is no correlation between Cassandra release versions and the CQL language version.
-
-|_. version|_. description|
-|Major     |The major version _must_ be bumped when backward incompatible changes are introduced. This should rarely occur.|
-|Minor     |Minor version increments occur when new, but backward compatible, functionality is introduced.|
-|Patch     |The patch version is incremented when bugs are fixed.|
diff --git a/doc/modules/cassandra/pages/cql/changes.adoc b/doc/modules/cassandra/pages/cql/changes.adoc
index 1f89469a32..df99a39ef6 100644
--- a/doc/modules/cassandra/pages/cql/changes.adoc
+++ b/doc/modules/cassandra/pages/cql/changes.adoc
@@ -2,6 +2,16 @@
 
 The following describes the changes in each version of CQL.
 
+== 3.4.6
+
+* Add support for IF EXISTS and IF NOT EXISTS in ALTER statements  (`16916`)
+* Allow GRANT/REVOKE multiple permissions in a single statement (`17030`)
+* Pre hashed passwords in CQL (`17334`)
+* Add support for type casting in WHERE clause components and in the values of INSERT/UPDATE statements (`14337`)
+* Add support for CONTAINS and CONTAINS KEY in conditional UPDATE and DELETE statement (`10537`)
+* Allow to grant permission for all tables in a keyspace (`17027`)
+* Allow to aggregate by time intervals (`11871`)
+
 == 3.4.5
 
 * Adds support for arithmetic operators (`11935`)
diff --git a/doc/modules/cassandra/pages/new/virtualtables.adoc b/doc/modules/cassandra/pages/new/virtualtables.adoc
index 3e612580ba..7a7a4befa7 100644
--- a/doc/modules/cassandra/pages/new/virtualtables.adoc
+++ b/doc/modules/cassandra/pages/new/virtualtables.adoc
@@ -124,7 +124,7 @@ cqlsh> SELECT * FROM system_views.clients;
 
------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  address          | 127.0.0.1
  port             | 50687
- client_options   | {'CQL_VERSION': '3.4.5', 'DRIVER_NAME': 'DataStax Python Driver', 'DRIVER_VERSION': '3.25.0'}
+ client_options   | {'CQL_VERSION': '3.4.6', 'DRIVER_NAME': 'DataStax Python Driver', 'DRIVER_VERSION': '3.25.0'}
  connection_stage | ready
  driver_name      | DataStax Python Driver
  driver_version   | 3.25.0
@@ -140,7 +140,7 @@ cqlsh> SELECT * FROM system_views.clients;
 
------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  address          | 127.0.0.1
  port             | 50688
- client_options   | {'CQL_VERSION': '3.4.5', 'DRIVER_NAME': 'DataStax Python Driver', 'DRIVER_VERSION': '3.25.0'}
+ client_options   | {'CQL_VERSION': '3.4.6', 'DRIVER_NAME': 'DataStax Python Driver', 'DRIVER_VERSION': '3.25.0'}
  connection_stage | ready
  driver_name      | DataStax Python Driver
  driver_version   | 3.25.0
diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
index f0a0a7425f..64a075ba04 100644
--- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
@@ -79,7 +79,7 @@ import static org.apache.cassandra.utils.Clock.Global.nanoTime;
 
 public class QueryProcessor implements QueryHandler
 {
-    public static final CassandraVersion CQL_VERSION = new CassandraVersion("3.4.5");
+    public static final CassandraVersion CQL_VERSION = new CassandraVersion("3.4.6");
 
     // See comments on QueryProcessor #prepare
     public static final CassandraVersion NEW_PREPARED_STATEMENT_BEHAVIOUR_SINCE_30 = new CassandraVersion("3.0.26");

