Added: cassandra/site/src/doc/3.10/_sources/cql/functions.txt
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/doc/3.10/_sources/cql/functions.txt?rev=1757419&view=auto
==============================================================================
--- cassandra/site/src/doc/3.10/_sources/cql/functions.txt (added)
+++ cassandra/site/src/doc/3.10/_sources/cql/functions.txt Tue Aug 23 19:25:17 
2016
@@ -0,0 +1,558 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: cql
+
+.. _cql-functions:
+
+.. Need some intro for UDF and native functions in general and point those to 
it.
+.. _udfs:
+.. _native-functions:
+
+Functions
+---------
+
+CQL supports 2 main categories of functions:
+
+- the :ref:`scalar functions <scalar-functions>`, which simply take a number 
of values and produce an output with it.
+- the :ref:`aggregate functions <aggregate-functions>`, which are used to 
aggregate multiple rows results from a
+  ``SELECT`` statement.
+
+In both cases, CQL provides a number of native "hard-coded" functions as well 
as the ability to create new user-defined
+functions.
+
+.. note:: The use of user-defined functions is disabled by default for security concerns (even when
+   enabled, the execution of user-defined functions is sandboxed and a "rogue" function should not be allowed to do
+   evil, but no sandbox is perfect so using user-defined functions is opt-in). See the
+   ``enable_user_defined_functions`` setting in ``cassandra.yaml`` to enable them.
+
+A function is identified by its name:
+
+.. productionlist::
+   function_name: [ `keyspace_name` '.' ] `name`
+
+.. _scalar-functions:
+
+Scalar functions
+^^^^^^^^^^^^^^^^
+
+.. _scalar-native-functions:
+
+Native functions
+~~~~~~~~~~~~~~~~
+
+Cast
+````
+
+The ``cast`` function can be used to convert one native datatype to another.
+
+The following table describes the conversions supported by the ``cast`` 
function. Cassandra will silently ignore any
+cast converting a datatype into its own datatype.
+
+=============== =======================================================================================================
+ From            To
+=============== =======================================================================================================
+ ``ascii``       ``text``, ``varchar``
+ ``bigint``      ``tinyint``, ``smallint``, ``int``, ``float``, ``double``, ``decimal``, ``varint``, ``text``,
+                 ``varchar``
+ ``boolean``     ``text``, ``varchar``
+ ``counter``     ``tinyint``, ``smallint``, ``int``, ``bigint``, ``float``, ``double``, ``decimal``, ``varint``,
+                 ``text``, ``varchar``
+ ``date``        ``timestamp``
+ ``decimal``     ``tinyint``, ``smallint``, ``int``, ``bigint``, ``float``, ``double``, ``varint``, ``text``,
+                 ``varchar``
+ ``double``      ``tinyint``, ``smallint``, ``int``, ``bigint``, ``float``, ``decimal``, ``varint``, ``text``,
+                 ``varchar``
+ ``float``       ``tinyint``, ``smallint``, ``int``, ``bigint``, ``double``, ``decimal``, ``varint``, ``text``,
+                 ``varchar``
+ ``inet``        ``text``, ``varchar``
+ ``int``         ``tinyint``, ``smallint``, ``bigint``, ``float``, ``double``, ``decimal``, ``varint``, ``text``,
+                 ``varchar``
+ ``smallint``    ``tinyint``, ``int``, ``bigint``, ``float``, ``double``, ``decimal``, ``varint``, ``text``,
+                 ``varchar``
+ ``time``        ``text``, ``varchar``
+ ``timestamp``   ``date``, ``text``, ``varchar``
+ ``timeuuid``    ``timestamp``, ``date``, ``text``, ``varchar``
+ ``tinyint``     ``tinyint``, ``smallint``, ``int``, ``bigint``, ``float``, ``double``, ``decimal``, ``varint``,
+                 ``text``, ``varchar``
+ ``uuid``        ``text``, ``varchar``
+ ``varint``      ``tinyint``, ``smallint``, ``int``, ``bigint``, ``float``, ``double``, ``decimal``, ``text``,
+                 ``varchar``
+=============== =======================================================================================================
+
+The conversions rely strictly on Java's semantics. For example, the double 
value 1 will be converted to the text value
+'1.0'. For instance::
+
+    SELECT avg(cast(count as double)) FROM myTable
+
+Token
+`````
+
+The ``token`` function computes the token for a given partition key. The exact signature of the token function
+depends on the table concerned and on the partitioner used by the cluster.
+
+The type of the arguments of the ``token`` function depends on the types of the partition key columns. The return type
+depends on the partitioner in use:
+
+- For Murmur3Partitioner, the return type is ``bigint``.
+- For RandomPartitioner, the return type is ``varint``.
+- For ByteOrderedPartitioner, the return type is ``blob``.
+
+For instance, in a cluster using the default Murmur3Partitioner, if a table is 
defined by::
+
+    CREATE TABLE users (
+        userid text PRIMARY KEY,
+        username text,
+    )
+
+then the ``token`` function will take a single argument of type ``text`` (in that case, the partition key is ``userid``
+(there are no clustering columns so the partition key is the same as the primary key)), and the return type will be
+``bigint``.
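+
+For example, a query restricting on the token of the partition key of that (hypothetical) ``users`` table could look
+like::
+
+    SELECT * FROM users WHERE token(userid) > token('tom');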
+
+Uuid
+````
+The ``uuid`` function takes no parameters and generates a random type 4 uuid 
suitable for use in ``INSERT`` or
+``UPDATE`` statements.
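+
+For instance, assuming a hypothetical table ``events`` with a ``uuid`` partition key::
+
+    CREATE TABLE events (id uuid PRIMARY KEY, payload text);
+    INSERT INTO events (id, payload) VALUES (uuid(), 'some payload');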
+
+.. _timeuuid-functions:
+
+Timeuuid functions
+``````````````````
+
+``now``
+#######
+
+The ``now`` function takes no arguments and generates, on the coordinator node, a new unique timeuuid (at the time the
+statement using it is executed). Note that this method is useful for insertion but is largely nonsensical in
+``WHERE`` clauses. For instance, a query of the form::
+
+    SELECT * FROM myTable WHERE t = now()
+
+will never return any result by design, since the value returned by ``now()`` 
is guaranteed to be unique.
+
+``minTimeuuid`` and ``maxTimeuuid``
+###################################
+
+The ``minTimeuuid`` (resp. ``maxTimeuuid``) function takes a ``timestamp`` value ``t`` (which can be `either a timestamp
+or a date string <timestamps>`) and returns a *fake* ``timeuuid`` corresponding to the *smallest* (resp. *biggest*)
+possible ``timeuuid`` having ``t`` as its timestamp. So for instance::
+
+    SELECT * FROM myTable
+     WHERE t > maxTimeuuid('2013-01-01 00:05+0000')
+       AND t < minTimeuuid('2013-02-02 10:00+0000')
+
+will select all rows where the ``timeuuid`` column ``t`` is strictly older 
than ``'2013-01-01 00:05+0000'`` but strictly
+younger than ``'2013-02-02 10:00+0000'``. Please note that ``t >= 
maxTimeuuid('2013-01-01 00:05+0000')`` would still
+*not* select a ``timeuuid`` generated exactly at '2013-01-01 00:05+0000' and 
is essentially equivalent to ``t >
+maxTimeuuid('2013-01-01 00:05+0000')``.
+
+.. note:: We call the values generated by ``minTimeuuid`` and ``maxTimeuuid`` *fake* UUIDs because they do not respect
+   the Time-Based UUID generation process specified by `RFC 4122 <http://www.ietf.org/rfc/rfc4122.txt>`__. In
+   particular, the values returned by these two methods will not be unique. This means you should only use those methods
+   for querying (as in the example above). Inserting the result of those methods is almost certainly *a bad idea*.
+
+Time conversion functions
+`````````````````````````
+
+A number of functions are provided to “convert” a ``timeuuid``, a 
``timestamp`` or a ``date`` into another ``native``
+type.
+
+===================== =============== ===================================================================
+ Function name         Input type      Description
+===================== =============== ===================================================================
+ ``toDate``            ``timeuuid``    Converts the ``timeuuid`` argument into a ``date`` type
+ ``toDate``            ``timestamp``   Converts the ``timestamp`` argument into a ``date`` type
+ ``toTimestamp``       ``timeuuid``    Converts the ``timeuuid`` argument into a ``timestamp`` type
+ ``toTimestamp``       ``date``        Converts the ``date`` argument into a ``timestamp`` type
+ ``toUnixTimestamp``   ``timeuuid``    Converts the ``timeuuid`` argument into a ``bigint`` raw value
+ ``toUnixTimestamp``   ``timestamp``   Converts the ``timestamp`` argument into a ``bigint`` raw value
+ ``toUnixTimestamp``   ``date``        Converts the ``date`` argument into a ``bigint`` raw value
+ ``dateOf``            ``timeuuid``    Similar to ``toTimestamp(timeuuid)`` (DEPRECATED)
+ ``unixTimestampOf``   ``timeuuid``    Similar to ``toUnixTimestamp(timeuuid)`` (DEPRECATED)
+===================== =============== ===================================================================
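+
+For instance, assuming a hypothetical table ``events`` with a ``timeuuid`` column ``t``, the different conversions can
+be selected together::
+
+    SELECT toDate(t), toTimestamp(t), toUnixTimestamp(t) FROM events;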
+
+Blob conversion functions
+`````````````````````````
+A number of functions are provided to “convert” the native types into binary data (``blob``). For every
+``<native-type>`` ``type`` supported by CQL (a notable exception is ``blob``, for obvious reasons), the function
+``typeAsBlob`` takes an argument of type ``type`` and returns it as a ``blob``. Conversely, the function ``blobAsType``
+takes a ``blob`` argument and converts it back into a value of type ``type``. For instance, ``bigintAsBlob(3)`` is
+``0x0000000000000003`` and ``blobAsBigint(0x0000000000000003)`` is ``3``.
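+
+For instance, a small sketch using a hypothetical table ``blobs`` with a ``blob`` column ``b``::
+
+    CREATE TABLE blobs (k int PRIMARY KEY, b blob);
+    INSERT INTO blobs (k, b) VALUES (1, bigintAsBlob(3));
+    SELECT blobAsBigint(b) FROM blobs;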
+
+.. _user-defined-scalar-functions:
+
+User-defined functions
+~~~~~~~~~~~~~~~~~~~~~~
+
+User-defined functions allow execution of user-provided code in Cassandra. By 
default, Cassandra supports defining
+functions in *Java* and *JavaScript*. Support for other JSR 223 compliant 
scripting languages (such as Python, Ruby, and
+Scala) can be added by adding a JAR to the classpath.
+
+UDFs are part of the Cassandra schema. As such, they are automatically 
propagated to all nodes in the cluster.
+
+UDFs can be *overloaded* - i.e. multiple UDFs with different argument types 
but the same function name. Example::
+
+    CREATE FUNCTION sample ( arg int ) ...;
+    CREATE FUNCTION sample ( arg text ) ...;
+
+User-defined functions are susceptible to all of the normal problems with the 
chosen programming language. Accordingly,
+implementations should be safe against null pointer exceptions, illegal 
arguments, or any other potential source of
+exceptions. An exception during function execution will result in the entire 
statement failing.
+
+It is valid to use *complex* types like collections, tuple types and 
user-defined types as argument and return types.
+Tuple types and user-defined types are handled by the conversion functions of 
the DataStax Java Driver. Please see the
+documentation of the Java Driver for details on handling tuple types and 
user-defined types.
+
+Arguments for functions can be literals or terms. Prepared statement 
placeholders can be used, too.
+
+Note that you can use the dollar-quoted string syntax (``$$ ... $$``) to enclose the UDF source code. For example::
+
+    CREATE FUNCTION some_function ( arg int )
+        RETURNS NULL ON NULL INPUT
+        RETURNS int
+        LANGUAGE java
+        AS $$ return arg; $$;
+
+    SELECT some_function(column) FROM atable ...;
+    UPDATE atable SET col = some_function(?) ...;
+
+    CREATE TYPE custom_type (txt text, i int);
+    CREATE FUNCTION fct_using_udt ( udtarg frozen<custom_type> )
+        RETURNS NULL ON NULL INPUT
+        RETURNS text
+        LANGUAGE java
+        AS $$ return udtarg.getString("txt"); $$;
+
+User-defined functions can be used in ``SELECT``, ``INSERT`` and ``UPDATE`` 
statements.
+
+The implicitly available ``udfContext`` field (or binding for script UDFs) 
provides the necessary functionality to
+create new UDT and tuple values::
+
+    CREATE TYPE custom_type (txt text, i int);
+    CREATE FUNCTION fct_using_udt ( somearg int )
+        RETURNS NULL ON NULL INPUT
+        RETURNS custom_type
+        LANGUAGE java
+        AS $$
+            UDTValue udt = udfContext.newReturnUDTValue();
+            udt.setString("txt", "some string");
+            udt.setInt("i", 42);
+            return udt;
+        $$;
+
+The definition of the ``UDFContext`` interface can be found in the Apache 
Cassandra source code for
+``org.apache.cassandra.cql3.functions.UDFContext``.
+
+.. code-block:: java
+
+    public interface UDFContext
+    {
+        UDTValue newArgUDTValue(String argName);
+        UDTValue newArgUDTValue(int argNum);
+        UDTValue newReturnUDTValue();
+        UDTValue newUDTValue(String udtName);
+        TupleValue newArgTupleValue(String argName);
+        TupleValue newArgTupleValue(int argNum);
+        TupleValue newReturnTupleValue();
+        TupleValue newTupleValue(String cqlDefinition);
+    }
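+
+For example, a minimal sketch of a UDF building a tuple value through ``udfContext`` (the function name
+``fct_using_tuple`` is only illustrative)::
+
+    CREATE FUNCTION fct_using_tuple ( somearg int )
+        RETURNS NULL ON NULL INPUT
+        RETURNS tuple<int, text>
+        LANGUAGE java
+        AS $$
+            TupleValue t = udfContext.newReturnTupleValue();
+            t.setInt(0, somearg);
+            t.setString(1, "some string");
+            return t;
+        $$;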
+
+Java UDFs already have some imports for common interfaces and classes defined. 
These imports are:
+
+.. code-block:: java
+
+    import java.nio.ByteBuffer;
+    import java.util.List;
+    import java.util.Map;
+    import java.util.Set;
+    import org.apache.cassandra.cql3.functions.UDFContext;
+    import com.datastax.driver.core.TypeCodec;
+    import com.datastax.driver.core.TupleValue;
+    import com.datastax.driver.core.UDTValue;
+
+Please note that these convenience imports are not available for script UDFs.
+
+.. _create-function-statement:
+
+CREATE FUNCTION
+```````````````
+
+Creating a new user-defined function uses the ``CREATE FUNCTION`` statement:
+
+.. productionlist::
+   create_function_statement: CREATE [ OR REPLACE ] FUNCTION [ IF NOT EXISTS ]
+                            :     `function_name` '(' `arguments_declaration` 
')'
+                            :     [ CALLED | RETURNS NULL ] ON NULL INPUT
+                            :     RETURNS `cql_type`
+                            :     LANGUAGE `identifier`
+                            :     AS `string`
+   arguments_declaration: `identifier` `cql_type` ( ',' `identifier` 
`cql_type` )*
+
+For instance::
+
+    CREATE OR REPLACE FUNCTION somefunction(somearg int, anotherarg text, complexarg frozen<someUDT>, listarg list<bigint>)
+        RETURNS NULL ON NULL INPUT
+        RETURNS text
+        LANGUAGE java
+        AS $$
+            // some Java code
+        $$;
+
+    CREATE FUNCTION IF NOT EXISTS akeyspace.fname(someArg int)
+        CALLED ON NULL INPUT
+        RETURNS text
+        LANGUAGE java
+        AS $$
+            // some Java code
+        $$;
+
+``CREATE FUNCTION`` with the optional ``OR REPLACE`` keywords either creates a 
function or replaces an existing one with
+the same signature. A ``CREATE FUNCTION`` without ``OR REPLACE`` fails if a 
function with the same signature already
+exists.
+
+If the optional ``IF NOT EXISTS`` keywords are used, the function will
+only be created if another function with the same signature does not
+exist.
+
+``OR REPLACE`` and ``IF NOT EXISTS`` cannot be used together.
+
+Behavior on invocation with ``null`` values must be defined for each
+function. There are two options:
+
+#. ``RETURNS NULL ON NULL INPUT`` declares that the function will always
+   return ``null`` if any of the input arguments is ``null``.
+#. ``CALLED ON NULL INPUT`` declares that the function will always be
+   executed.
+
+Function Signature
+##################
+
+Signatures are used to distinguish individual functions. The signature 
consists of:
+
+#. The fully qualified function name - i.e. *keyspace* plus *function-name*
+#. The concatenated list of all argument types
+
+Note that keyspace names, function names and argument types are subject to the 
default naming conventions and
+case-sensitivity rules.
+
+Functions belong to a keyspace. If no keyspace is specified in 
``<function-name>``, the current keyspace is used (i.e.
+the keyspace specified using the ``USE`` statement). It is not possible to 
create a user-defined function in one of the
+system keyspaces.
+
+.. _drop-function-statement:
+
+DROP FUNCTION
+`````````````
+
+Dropping a function uses the ``DROP FUNCTION`` statement:
+
+.. productionlist::
+   drop_function_statement: DROP FUNCTION [ IF EXISTS ] `function_name` [ '(' 
`arguments_signature` ')' ]
+   arguments_signature: `cql_type` ( ',' `cql_type` )*
+
+For instance::
+
+    DROP FUNCTION myfunction;
+    DROP FUNCTION mykeyspace.afunction;
+    DROP FUNCTION afunction ( int );
+    DROP FUNCTION afunction ( text );
+
+You must specify the argument types (:token:`arguments_signature`) of the 
function to drop if there are multiple
+functions with the same name but a different signature (overloaded functions).
+
+``DROP FUNCTION`` with the optional ``IF EXISTS`` keywords drops a function if it exists, but does not throw an error
+if it doesn't.
+
+.. _aggregate-functions:
+
+Aggregate functions
+^^^^^^^^^^^^^^^^^^^
+
+Aggregate functions work on a set of rows. They receive values for each row and return one value for the whole set.
+
+If ``normal`` columns, ``scalar functions``, ``UDT`` fields, ``writetime`` or 
``ttl`` are selected together with
+aggregate functions, the values returned for them will be the ones of the 
first row matching the query.
+
+Native aggregates
+~~~~~~~~~~~~~~~~~
+
+.. _count-function:
+
+Count
+`````
+
+The ``count`` function can be used to count the rows returned by a query. 
Example::
+
+    SELECT COUNT (*) FROM plays;
+    SELECT COUNT (1) FROM plays;
+
+It can also be used to count the non-null values of a given column::
+
+    SELECT COUNT (scores) FROM plays;
+
+Max and Min
+```````````
+
+The ``max`` and ``min`` functions can be used to compute the maximum and the 
minimum value returned by a query for a
+given column. For instance::
+
+    SELECT MIN (players), MAX (players) FROM plays WHERE game = 'quake';
+
+Sum
+```
+
+The ``sum`` function can be used to sum up all the values returned by a query 
for a given column. For instance::
+
+    SELECT SUM (players) FROM plays;
+
+Avg
+```
+
+The ``avg`` function can be used to compute the average of all the values 
returned by a query for a given column. For
+instance::
+
+    SELECT AVG (players) FROM plays;
+
+.. _user-defined-aggregates-functions:
+
+User-Defined Aggregates
+~~~~~~~~~~~~~~~~~~~~~~~
+
+User-defined aggregates allow the creation of custom aggregate functions. 
Common examples of aggregate functions are
+*count*, *min*, and *max*.
+
+Each aggregate requires an *initial state* (``INITCOND``, which defaults to 
``null``) of type ``STYPE``. The first
+argument of the state function must have type ``STYPE``. The remaining 
arguments of the state function must match the
+types of the user-defined aggregate arguments. The state function is called 
once for each row, and the value returned by
+the state function becomes the new state. After all rows are processed, the 
optional ``FINALFUNC`` is executed with last
+state value as its argument.
+
+``STYPE`` is mandatory in order to be able to distinguish possibly overloaded 
versions of the state and/or final
+function (since the overload can appear after creation of the aggregate).
+
+User-defined aggregates can be used in ``SELECT`` statements.
+
+A complete working example for user-defined aggregates (assuming that a 
keyspace has been selected using the ``USE``
+statement)::
+
+    CREATE OR REPLACE FUNCTION averageState(state tuple<int,bigint>, val int)
+        CALLED ON NULL INPUT
+        RETURNS tuple<int,bigint>
+        LANGUAGE java
+        AS $$
+            if (val != null) {
+                state.setInt(0, state.getInt(0)+1);
+                state.setLong(1, state.getLong(1)+val.intValue());
+            }
+            return state;
+        $$;
+
+    CREATE OR REPLACE FUNCTION averageFinal (state tuple<int,bigint>)
+        CALLED ON NULL INPUT
+        RETURNS double
+        LANGUAGE java
+        AS $$
+            double r = 0;
+            if (state.getInt(0) == 0) return null;
+            r = state.getLong(1);
+            r /= state.getInt(0);
+            return Double.valueOf(r);
+        $$;
+
+    CREATE OR REPLACE AGGREGATE average(int)
+        SFUNC averageState
+        STYPE tuple<int,bigint>
+        FINALFUNC averageFinal
+        INITCOND (0, 0);
+
+    CREATE TABLE atable (
+        pk int PRIMARY KEY,
+        val int
+    );
+
+    INSERT INTO atable (pk, val) VALUES (1,1);
+    INSERT INTO atable (pk, val) VALUES (2,2);
+    INSERT INTO atable (pk, val) VALUES (3,3);
+    INSERT INTO atable (pk, val) VALUES (4,4);
+
+    SELECT average(val) FROM atable;
+
+.. _create-aggregate-statement:
+
+CREATE AGGREGATE
+````````````````
+
+Creating (or replacing) a user-defined aggregate function uses the ``CREATE 
AGGREGATE`` statement:
+
+.. productionlist::
+   create_aggregate_statement: CREATE [ OR REPLACE ] AGGREGATE [ IF NOT EXISTS 
]
+                             :     `function_name` '(' `arguments_signature` 
')'
+                             :     SFUNC `function_name`
+                             :     STYPE `cql_type`
+                             :     [ FINALFUNC `function_name` ]
+                             :     [ INITCOND `term` ]
+
+See above for a complete example.
+
+``CREATE AGGREGATE`` with the optional ``OR REPLACE`` keywords either creates 
an aggregate or replaces an existing one
+with the same signature. A ``CREATE AGGREGATE`` without ``OR REPLACE`` fails 
if an aggregate with the same signature
+already exists.
+
+``CREATE AGGREGATE`` with the optional ``IF NOT EXISTS`` keywords creates an aggregate only if it does not already
+exist.
+
+``OR REPLACE`` and ``IF NOT EXISTS`` cannot be used together.
+
+``STYPE`` defines the type of the state value and must be specified.
+
+The optional ``INITCOND`` defines the initial state value for the aggregate. 
It defaults to ``null``. A non-\ ``null``
+``INITCOND`` must be specified for state functions that are declared with 
``RETURNS NULL ON NULL INPUT``.
+
+``SFUNC`` references an existing function to be used as the state modifying 
function. The type of first argument of the
+state function must match ``STYPE``. The remaining argument types of the state 
function must match the argument types of
+the aggregate function. State is not updated for state functions declared with 
``RETURNS NULL ON NULL INPUT`` and called
+with ``null``.
+
+The optional ``FINALFUNC`` is called just before the aggregate result is 
returned. It must take only one argument with
+type ``STYPE``. The return type of the ``FINALFUNC`` may be a different type. 
A final function declared with ``RETURNS
+NULL ON NULL INPUT`` means that the aggregate's return value will be ``null``, 
if the last state is ``null``.
+
+If no ``FINALFUNC`` is defined, the overall return type of the aggregate 
function is ``STYPE``. If a ``FINALFUNC`` is
+defined, it is the return type of that function.
+
+.. _drop-aggregate-statement:
+
+DROP AGGREGATE
+``````````````
+
+Dropping a user-defined aggregate function uses the ``DROP AGGREGATE`` statement:
+
+.. productionlist::
+   drop_aggregate_statement: DROP AGGREGATE [ IF EXISTS ] `function_name` [ 
'(' `arguments_signature` ')' ]
+
+For instance::
+
+    DROP AGGREGATE myAggregate;
+    DROP AGGREGATE myKeyspace.anAggregate;
+    DROP AGGREGATE someAggregate ( int );
+    DROP AGGREGATE someAggregate ( text );
+
+The ``DROP AGGREGATE`` statement removes an aggregate created using ``CREATE 
AGGREGATE``. You must specify the argument
+types of the aggregate to drop if there are multiple aggregates with the same 
name but a different signature (overloaded
+aggregates).
+
+``DROP AGGREGATE`` with the optional ``IF EXISTS`` keywords drops an aggregate if it exists, and does nothing if an
+aggregate with that signature does not exist.

Added: cassandra/site/src/doc/3.10/_sources/cql/index.txt
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/doc/3.10/_sources/cql/index.txt?rev=1757419&view=auto
==============================================================================
--- cassandra/site/src/doc/3.10/_sources/cql/index.txt (added)
+++ cassandra/site/src/doc/3.10/_sources/cql/index.txt Tue Aug 23 19:25:17 2016
@@ -0,0 +1,47 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. _cql:
+
+The Cassandra Query Language (CQL)
+==================================
+
+This document describes the Cassandra Query Language (CQL) [#]_. Note that this document describes the latest version
+of the language. However, the `changes <#changes>`_ section provides the differences between the different versions of
+CQL.
+
+CQL offers a model close to SQL in the sense that data is put in *tables* containing *rows* of *columns*. For
+that reason, when used in this document, these terms (tables, rows and columns) have the same definition as they have
+in SQL. But please note that as such, they do **not** refer to the concept of rows and columns found in the deprecated
+thrift API (and earlier versions 1 and 2 of CQL).
+
+.. toctree::
+   :maxdepth: 2
+
+   definitions
+   types
+   ddl
+   dml
+   indexes
+   mvs
+   security
+   functions
+   json
+   triggers
+   appendices
+   changes
+
+.. [#] Technically, this document describes CQL version 3, which is not backward compatible with CQL versions 1 and 2
+   (which have been deprecated and removed) and differs from them in numerous ways.

Added: cassandra/site/src/doc/3.10/_sources/cql/indexes.txt
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/doc/3.10/_sources/cql/indexes.txt?rev=1757419&view=auto
==============================================================================
--- cassandra/site/src/doc/3.10/_sources/cql/indexes.txt (added)
+++ cassandra/site/src/doc/3.10/_sources/cql/indexes.txt Tue Aug 23 19:25:17 
2016
@@ -0,0 +1,83 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: cql
+
+.. _secondary-indexes:
+
+Secondary Indexes
+-----------------
+
+CQL supports creating secondary indexes on tables, allowing queries on the 
table to use those indexes. A secondary index
+is identified by a name defined by:
+
+.. productionlist::
+   index_name: re('[a-zA-Z_0-9]+')
+
+
+
+.. _create-index-statement:
+
+CREATE INDEX
+^^^^^^^^^^^^
+
+Creating a secondary index on a table uses the ``CREATE INDEX`` statement:
+
+.. productionlist::
+   create_index_statement: CREATE [ CUSTOM ] INDEX [ IF NOT EXISTS ] [ 
`index_name` ]
+                         :     ON `table_name` '(' `index_identifier` ')'
+                         :     [ USING `string` [ WITH OPTIONS = `map_literal` 
] ]
+   index_identifier: `column_name`
+                   :| ( KEYS | VALUES | ENTRIES | FULL ) '(' `column_name` ')'
+
+For instance::
+
+    CREATE INDEX userIndex ON NerdMovies (user);
+    CREATE INDEX ON Mutants (abilityId);
+    CREATE INDEX ON users (keys(favs));
+    CREATE CUSTOM INDEX ON users (email) USING 'path.to.the.IndexClass';
+    CREATE CUSTOM INDEX ON users (email) USING 'path.to.the.IndexClass' WITH 
OPTIONS = {'storage': '/mnt/ssd/indexes/'};
+
+The ``CREATE INDEX`` statement is used to create a new (automatic) secondary 
index for a given (existing) column in a
+given table. A name for the index itself can be specified before the ``ON`` 
keyword, if desired. If data already exists
+for the column, it will be indexed asynchronously. After the index is created, 
new data for the column is indexed
+automatically at insertion time.
+
+Attempting to create an already existing index will return an error unless the 
``IF NOT EXISTS`` option is used. If it
+is used, the statement will be a no-op if the index already exists.
+
+Indexes on Map Keys
+~~~~~~~~~~~~~~~~~~~
+
+When creating an index on a :ref:`map <maps>`, you may index either the keys or the values. If the column identifier is
+placed within the ``keys()`` function, the index will be on the map keys, 
allowing you to use ``CONTAINS KEY`` in
+``WHERE`` clauses. Otherwise, the index will be on the map values.
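+
+For example, with the ``users`` table and the ``keys(favs)`` index created above, a (hypothetical) query on the map
+keys could be::
+
+    SELECT * FROM users WHERE favs CONTAINS KEY 'movies';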
+
+.. _drop-index-statement:
+
+DROP INDEX
+^^^^^^^^^^
+
+Dropping a secondary index uses the ``DROP INDEX`` statement:
+
+.. productionlist::
+   drop_index_statement: DROP INDEX [ IF EXISTS ] `index_name`
+
+The ``DROP INDEX`` statement is used to drop an existing secondary index. The 
argument of the statement is the index
+name, which may optionally specify the keyspace of the index.
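+
+For instance, reusing the ``userIndex`` created above (``userkeyspace`` is a hypothetical keyspace name)::
+
+    DROP INDEX userIndex;
+    DROP INDEX userkeyspace.userIndex;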
+
+If the index does not exist, the statement will return an error, unless ``IF EXISTS`` is used in which case the
+operation is a no-op.

Added: cassandra/site/src/doc/3.10/_sources/cql/json.txt
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/doc/3.10/_sources/cql/json.txt?rev=1757419&view=auto
==============================================================================
--- cassandra/site/src/doc/3.10/_sources/cql/json.txt (added)
+++ cassandra/site/src/doc/3.10/_sources/cql/json.txt Tue Aug 23 19:25:17 2016
@@ -0,0 +1,115 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: cql
+
+.. _cql-json:
+
+JSON Support
+------------
+
+Cassandra 2.2 introduces JSON support to :ref:`SELECT <select-statement>` and 
:ref:`INSERT <insert-statement>`
+statements. This support does not fundamentally alter the CQL API (for 
example, the schema is still enforced), it simply
+provides a convenient way to work with JSON documents.
+
+SELECT JSON
+^^^^^^^^^^^
+
+With ``SELECT`` statements, the ``JSON`` keyword can be used to return each 
row as a single ``JSON`` encoded map. The
+remainder of the ``SELECT`` statement behavior is the same.
+
+The result map keys are the same as the column names in a normal result set. 
For example, a statement like ``SELECT JSON
+a, ttl(b) FROM ...`` would result in a map with keys ``"a"`` and ``"ttl(b)"``. However, there is one notable exception:
+for symmetry with ``INSERT JSON`` behavior, case-sensitive column names with 
upper-case letters will be surrounded with
+double quotes. For example, ``SELECT JSON myColumn FROM ...`` would result in 
a map key ``"\"myColumn\""`` (note the
+escaped quotes).
+
+The map values will be ``JSON``-encoded representations (as described below) of the result set values.
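+
+For example, assuming a hypothetical ``users`` table with ``name`` and ``age`` columns::
+
+    SELECT JSON name, age FROM users;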
+
+INSERT JSON
+^^^^^^^^^^^
+
+With ``INSERT`` statements, the new ``JSON`` keyword can be used to enable 
inserting a ``JSON`` encoded map as a single
+row. The format of the ``JSON`` map should generally match that returned by a 
``SELECT JSON`` statement on the same
+table. In particular, case-sensitive column names should be surrounded with 
double quotes. For example, to insert into a
+table with two columns named "myKey" and "value", you would do the following::
+
+    INSERT INTO mytable JSON '{ "\"myKey\"": 0, "value": 0}'
+
+By default (or if ``DEFAULT NULL`` is explicitly used), a column omitted from 
the ``JSON`` map will be set to ``NULL``,
+meaning that any pre-existing value for that column will be removed (resulting 
in a tombstone being created).
+Alternatively, if the ``DEFAULT UNSET`` directive is used after the value, omitted column values will be left unset,
+meaning that pre-existing values for those columns will be preserved.
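+
+For example, with the table used above, the following statement leaves any pre-existing value of the ``value`` column
+untouched for the inserted row::
+
+    INSERT INTO mytable JSON '{ "\"myKey\"": 0 }' DEFAULT UNSET;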
+
+
+JSON Encoding of Cassandra Data Types
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Where possible, Cassandra will represent and accept data types in their native 
``JSON`` representation. Cassandra will
+also accept string representations matching the CQL literal format for all 
single-field types. For example, floats,
+ints, UUIDs, and dates can be represented by CQL literal strings. However, 
compound types, such as collections, tuples,
+and user-defined types must be represented by native ``JSON`` collections 
(maps and lists) or a JSON-encoded string
+representation of the collection.
+
+The following table describes the encodings that Cassandra will accept in ``INSERT JSON`` values (and ``fromJson()``
+arguments) as well as the format Cassandra will use when returning data for ``SELECT JSON`` statements (and
+``toJson()``):
+
+=============== ======================== =============== ==============================================================
+ Type            Formats accepted         Return format   Notes
+=============== ======================== =============== ==============================================================
+ ``ascii``       string                   string          Uses JSON's ``\u`` character escape
+ ``bigint``      integer, string          integer         String must be valid 64 bit integer
+ ``blob``        string                   string          String should be 0x followed by an even number of hex digits
+ ``boolean``     boolean, string          boolean         String must be "true" or "false"
+ ``date``        string                   string          Date in format ``YYYY-MM-DD``, timezone UTC
+ ``decimal``     integer, float, string   float           May exceed 32 or 64-bit IEEE-754 floating point precision in
+                                                          client-side decoder
+ ``double``      integer, float, string   float           String must be valid integer or float
+ ``float``       integer, float, string   float           String must be valid integer or float
+ ``inet``        string                   string          IPv4 or IPv6 address
+ ``int``         integer, string          integer         String must be valid 32 bit integer
+ ``list``        list, string             list            Uses JSON's native list representation
+ ``map``         map, string              map             Uses JSON's native map representation
+ ``smallint``    integer, string          integer         String must be valid 16 bit integer
+ ``set``         list, string             list            Uses JSON's native list representation
+ ``text``        string                   string          Uses JSON's ``\u`` character escape
+ ``time``        string                   string          Time of day in format ``HH-MM-SS[.fffffffff]``
+ ``timestamp``   integer, string          string          A timestamp. String constants allow inputting :ref:`timestamps
+                                                          as dates <timestamps>`. Datestamps with format ``YYYY-MM-DD
+                                                          HH:MM:SS.SSS`` are returned.
+ ``timeuuid``    string                   string          Type 1 UUID. See :token:`constant` for the UUID format
+ ``tinyint``     integer, string          integer         String must be valid 8 bit integer
+ ``tuple``       list, string             list            Uses JSON's native list representation
+ ``UDT``         map, string              map             Uses JSON's native map representation with field names as keys
+ ``uuid``        string                   string          See :token:`constant` for the UUID format
+ ``varchar``     string                   string          Uses JSON's ``\u`` character escape
+ ``varint``      integer, string          integer         Variable length; may overflow 32 or 64 bit integers in
+                                                          client-side decoder
+=============== ======================== =============== ==============================================================
+
+The fromJson() Function
+^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``fromJson()`` function may be used similarly to ``INSERT JSON``, but for 
a single column value. It may only be used
+in the ``VALUES`` clause of an ``INSERT`` statement or as one of the column 
values in an ``UPDATE``, ``DELETE``, or
+``SELECT`` statement. For example, it cannot be used in the selection clause 
of a ``SELECT`` statement.
+
+The toJson() Function
+^^^^^^^^^^^^^^^^^^^^^
+
+The ``toJson()`` function may be used similarly to ``SELECT JSON``, but for a 
single column value. It may only be used
+in the selection clause of a ``SELECT`` statement.
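+
+For example, with the table used in the ``INSERT JSON`` section above::
+
+    SELECT toJson("myKey"), toJson(value) FROM mytable;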

Added: cassandra/site/src/doc/3.10/_sources/cql/mvs.txt
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/doc/3.10/_sources/cql/mvs.txt?rev=1757419&view=auto
==============================================================================
--- cassandra/site/src/doc/3.10/_sources/cql/mvs.txt (added)
+++ cassandra/site/src/doc/3.10/_sources/cql/mvs.txt Tue Aug 23 19:25:17 2016
@@ -0,0 +1,166 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: cql
+
+.. _materialized-views:
+
+Materialized Views
+------------------
+
+Materialized view names are defined by:
+
+.. productionlist::
+   view_name: re('[a-zA-Z_0-9]+')
+
+
+.. _create-materialized-view-statement:
+
+CREATE MATERIALIZED VIEW
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can create a materialized view on a table using a ``CREATE MATERIALIZED 
VIEW`` statement:
+
+.. productionlist::
+   create_materialized_view_statement: CREATE MATERIALIZED VIEW [ IF NOT 
EXISTS ] `view_name` AS
+                                     :     `select_statement`
+                                     :     PRIMARY KEY '(' `primary_key` ')'
+                                     :     WITH `table_options`
+
+For instance::
+
+    CREATE MATERIALIZED VIEW monkeySpecies_by_population AS
+        SELECT * FROM monkeySpecies
+        WHERE population IS NOT NULL AND species IS NOT NULL
+        PRIMARY KEY (population, species)
+        WITH comment='Allow query by population instead of species';
+
+The ``CREATE MATERIALIZED VIEW`` statement creates a new materialized view. 
Each such view is a set of *rows* which
+corresponds to rows which are present in the underlying, or base, table 
specified in the ``SELECT`` statement. A
+materialized view cannot be directly updated, but updates to the base table 
will cause corresponding updates in the
+view.
+
+Creating a materialized view has 3 main parts:
+
+- The :ref:`select statement <mv-select>` that restricts the data included in the view.
+- The :ref:`primary key <mv-primary-key>` definition for the view.
+- The :ref:`options <mv-options>` for the view.
+
+Attempting to create an already existing materialized view will return an 
error unless the ``IF NOT EXISTS`` option is
+used. If it is used, the statement will be a no-op if the materialized view 
already exists.
+
+.. _mv-select:
+
+MV select statement
+```````````````````
+
+The select statement of a materialized view creation defines which rows of the base table are included in the view.
+That statement is limited in a number of ways:
+
+- the :ref:`selection <selection-clause>` is limited to those that only select 
columns of the base table. In other
+  words, you can't use any function (aggregate or not), casting, term, etc. 
Aliases are also not supported. You can
+  however use `*` as a shortcut of selecting all columns. Further, 
:ref:`static columns <static-columns>` cannot be
+  included in a materialized view (which means ``SELECT *`` isn't allowed if 
the base table has static columns).
+- the ``WHERE`` clause has the following restrictions:
+
+  - it cannot include any :token:`bind_marker`.
+  - the columns that are not part of the *base table* primary key can only be 
restricted by an ``IS NOT NULL``
+    restriction. No other restriction is allowed.
+  - as the columns that are part of the *view* primary key cannot be null, they must always be at least restricted by
+    an ``IS NOT NULL`` restriction (or any other restriction, but they must have one).
+
+- it cannot include an :ref:`ordering clause <ordering-clause>`, a :ref:`limit <limit-clause>`, or :ref:`ALLOW
+  FILTERING <allow-filtering>`.
+
+.. _mv-primary-key:
+
+MV primary key
+``````````````
+
+A view must have a primary key and that primary key must conform to the 
following restrictions:
+
+- it must contain all the primary key columns of the base table. This ensures that every row of the view corresponds
+  to exactly one row of the base table.
+- it can only contain a single column that is not a primary key column in the 
base table.
+
+So for instance, given the following base table definition::
+
+    CREATE TABLE t (
+        k int,
+        c1 int,
+        c2 int,
+        v1 int,
+        v2 int,
+        PRIMARY KEY (k, c1, c2)
+    )
+
+then the following view definitions are allowed::
+
+    CREATE MATERIALIZED VIEW mv1 AS
+        SELECT * FROM t WHERE k IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT 
NULL
+        PRIMARY KEY (c1, k, c2)
+
+    CREATE MATERIALIZED VIEW mv1 AS
+        SELECT * FROM t WHERE k IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT 
NULL
+        PRIMARY KEY (v1, k, c1, c2)
+
+but the following ones are **not** allowed::
+
+    // Error: cannot include both v1 and v2 in the primary key as both are not 
in the base table primary key
+    CREATE MATERIALIZED VIEW mv1 AS
+        SELECT * FROM t WHERE k IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT 
NULL AND v1 IS NOT NULL
+        PRIMARY KEY (v1, v2, k, c1, c2)
+
+    // Error: must include k in the primary key as it's a base table primary key column
+    CREATE MATERIALIZED VIEW mv1 AS
+        SELECT * FROM t WHERE c1 IS NOT NULL AND c2 IS NOT NULL
+        PRIMARY KEY (c1, c2)
+
+
+.. _mv-options:
+
+MV options
+``````````
+
+A materialized view is internally implemented by a table and as such, creating a MV allows the :ref:`same options as
+creating a table <create-table-options>`.
+
+
+.. _alter-materialized-view-statement:
+
+ALTER MATERIALIZED VIEW
+^^^^^^^^^^^^^^^^^^^^^^^
+
+After creation, you can alter the options of a materialized view using the 
``ALTER MATERIALIZED VIEW`` statement:
+
+.. productionlist::
+   alter_materialized_view_statement: ALTER MATERIALIZED VIEW `view_name` WITH 
`table_options`
+
+The options that can be updated are the same as at creation time and thus the :ref:`same as for tables
+<create-table-options>`.
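+
+For instance, reusing the view created above::
+
+    ALTER MATERIALIZED VIEW monkeySpecies_by_population
+        WITH comment = 'Allow query by population instead of species';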
+
+.. _drop-materialized-view-statement:
+
+DROP MATERIALIZED VIEW
+^^^^^^^^^^^^^^^^^^^^^^
+
+Dropping a materialized view uses the ``DROP MATERIALIZED VIEW`` statement:
+
+.. productionlist::
+   drop_materialized_view_statement: DROP MATERIALIZED VIEW [ IF EXISTS ] 
`view_name`;
+
+If the materialized view does not exist, the statement will return an error, unless ``IF EXISTS`` is used in which case
+the operation is a no-op.
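+
+For instance (``mykeyspace`` is a hypothetical keyspace name)::
+
+    DROP MATERIALIZED VIEW monkeySpecies_by_population;
+    DROP MATERIALIZED VIEW IF EXISTS mykeyspace.monkeySpecies_by_population;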

Added: cassandra/site/src/doc/3.10/_sources/cql/security.txt
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/doc/3.10/_sources/cql/security.txt?rev=1757419&view=auto
==============================================================================
--- cassandra/site/src/doc/3.10/_sources/cql/security.txt (added)
+++ cassandra/site/src/doc/3.10/_sources/cql/security.txt Tue Aug 23 19:25:17 
2016
@@ -0,0 +1,502 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: cql
+
+.. _cql-security:
+
+Security
+--------
+
+.. _cql-roles:
+
+Database Roles
+^^^^^^^^^^^^^^
+
+CQL uses database roles to represent users and groups of users. Syntactically, a role is defined by:
+
+.. productionlist::
+   role_name: `identifier` | `string`
+
+.. _create-role-statement:
+
+CREATE ROLE
+~~~~~~~~~~~
+
+Creating a role uses the ``CREATE ROLE`` statement:
+
+.. productionlist::
+   create_role_statement: CREATE ROLE [ IF NOT EXISTS ] `role_name`
+                        :     [ WITH `role_options` ]
+   role_options: `role_option` ( AND `role_option` )*
+   role_option: PASSWORD '=' `string`
+              :| LOGIN '=' `boolean`
+              :| SUPERUSER '=' `boolean`
+              :| OPTIONS '=' `map_literal`
+
+For instance::
+
+    CREATE ROLE new_role;
+    CREATE ROLE alice WITH PASSWORD = 'password_a' AND LOGIN = true;
+    CREATE ROLE bob WITH PASSWORD = 'password_b' AND LOGIN = true AND 
SUPERUSER = true;
+    CREATE ROLE carlos WITH OPTIONS = { 'custom_option1' : 'option1_value', 
'custom_option2' : 99 };
+
+By default roles do not possess ``LOGIN`` privileges or ``SUPERUSER`` status.
+
+:ref:`Permissions <cql-permissions>` on database resources are granted to 
roles; types of resources include keyspaces,
+tables, functions and roles themselves. Roles may be granted to other roles to 
create hierarchical permissions
+structures; in these hierarchies, permissions and ``SUPERUSER`` status are 
inherited, but the ``LOGIN`` privilege is
+not.
+
+If a role has the ``LOGIN`` privilege, clients may identify as that role when 
connecting. For the duration of that
+connection, the client will acquire any roles and privileges granted to that 
role.
+
+Only a client with the ``CREATE`` permission on the database roles resource may issue ``CREATE ROLE`` requests (see
+the :ref:`relevant section <cql-permissions>` below), unless the client is a 
``SUPERUSER``. Role management in Cassandra
+is pluggable and custom implementations may support only a subset of the 
listed options.
+
+Role names should be quoted if they contain non-alphanumeric characters.
+
+.. _setting-credentials-for-internal-authentication:
+
+Setting credentials for internal authentication
+```````````````````````````````````````````````
+
+Use the ``WITH PASSWORD`` clause to set a password for internal 
authentication, enclosing the password in single
+quotation marks.
+
+If internal authentication has not been set up or the role does not have 
``LOGIN`` privileges, the ``WITH PASSWORD``
+clause is not necessary.
+
+Creating a role conditionally
+`````````````````````````````
+
+Attempting to create an existing role results in an invalid query condition 
unless the ``IF NOT EXISTS`` option is used.
+If the option is used and the role exists, the statement is a no-op::
+
+    CREATE ROLE other_role;
+    CREATE ROLE IF NOT EXISTS other_role;
+
+
+.. _alter-role-statement:
+
+ALTER ROLE
+~~~~~~~~~~
+
+Altering the options of a role uses the ``ALTER ROLE`` statement:
+
+.. productionlist::
+   alter_role_statement: ALTER ROLE `role_name` WITH `role_options`
+
+For instance::
+
+    ALTER ROLE bob WITH PASSWORD = 'PASSWORD_B' AND SUPERUSER = false;
+
+Conditions on executing ``ALTER ROLE`` statements:
+
+-  A client must have ``SUPERUSER`` status to alter the ``SUPERUSER`` status 
of another role
+-  A client cannot alter the ``SUPERUSER`` status of any role it currently 
holds
+-  A client can only modify certain properties of the role with which it 
identified at login (e.g. ``PASSWORD``)
+-  To modify properties of a role, the client must be granted ``ALTER`` 
:ref:`permission <cql-permissions>` on that role
+
+.. _drop-role-statement:
+
+DROP ROLE
+~~~~~~~~~
+
+Dropping a role uses the ``DROP ROLE`` statement:
+
+.. productionlist::
+   drop_role_statement: DROP ROLE [ IF EXISTS ] `role_name`
+
+``DROP ROLE`` requires the client to have ``DROP`` :ref:`permission 
<cql-permissions>` on the role in question. In
+addition, a client may not ``DROP`` the role with which it identified at login. Finally, only a client with ``SUPERUSER``
+status may ``DROP`` another ``SUPERUSER`` role.
+
+Attempting to drop a role which does not exist results in an invalid query 
condition unless the ``IF EXISTS`` option is
+used. If the option is used and the role does not exist the statement is a 
no-op.
+
+.. _grant-role-statement:
+
+GRANT ROLE
+~~~~~~~~~~
+
+Granting a role to another uses the ``GRANT ROLE`` statement:
+
+.. productionlist::
+   grant_role_statement: GRANT `role_name` TO `role_name`
+
+For instance::
+
+    GRANT report_writer TO alice;
+
+This statement grants the ``report_writer`` role to ``alice``. Any permissions 
granted to ``report_writer`` are also
+acquired by ``alice``.
+
+Roles are modelled as a directed acyclic graph, so circular grants are not 
permitted. The following examples result in
+error conditions::
+
+    GRANT role_a TO role_b;
+    GRANT role_b TO role_a;
+
+    GRANT role_a TO role_b;
+    GRANT role_b TO role_c;
+    GRANT role_c TO role_a;
+
+.. _revoke-role-statement:
+
+REVOKE ROLE
+~~~~~~~~~~~
+
+Revoking a role uses the ``REVOKE ROLE`` statement:
+
+.. productionlist::
+   revoke_role_statement: REVOKE `role_name` FROM `role_name`
+
+For instance::
+
+    REVOKE report_writer FROM alice;
+
+This statement revokes the ``report_writer`` role from ``alice``. Any 
permissions that ``alice`` has acquired via the
+``report_writer`` role are also revoked.
+
+.. _list-roles-statement:
+
+LIST ROLES
+~~~~~~~~~~
+
+All the known roles (in the system or granted to a specific role) can be listed using the ``LIST ROLES`` statement:
+
+.. productionlist::
+   list_roles_statement: LIST ROLES [ OF `role_name` ] [ NORECURSIVE ]
+
+For instance::
+
+    LIST ROLES;
+
+returns all known roles in the system; this requires ``DESCRIBE`` permission on the database roles resource. And::
+
+    LIST ROLES OF alice;
+
+enumerates all roles granted to ``alice``, including those transitively 
acquired. But::
+
+    LIST ROLES OF bob NORECURSIVE
+
+lists all roles directly granted to ``bob`` without including any of the 
transitively acquired ones.
+
+Users
+^^^^^
+
+Prior to the introduction of roles in Cassandra 2.2, authentication and 
authorization were based around the concept of a
+``USER``. For backward compatibility, the legacy syntax has been preserved 
with ``USER`` centric statements becoming
+synonyms for the ``ROLE`` based equivalents. In other words, creating/updating 
a user is just a different syntax for
+creating/updating a role.
+
+.. _create-user-statement:
+
+CREATE USER
+~~~~~~~~~~~
+
+Creating a user uses the ``CREATE USER`` statement:
+
+.. productionlist::
+   create_user_statement: CREATE USER [ IF NOT EXISTS ] `role_name` [ WITH 
PASSWORD `string` ] [ `user_option` ]
+   user_option: SUPERUSER | NOSUPERUSER
+
+For instance::
+
+    CREATE USER alice WITH PASSWORD 'password_a' SUPERUSER;
+    CREATE USER bob WITH PASSWORD 'password_b' NOSUPERUSER;
+
+``CREATE USER`` is equivalent to ``CREATE ROLE`` where the ``LOGIN`` option is 
``true``. So, the following pairs of
+statements are equivalent::
+
+    CREATE USER alice WITH PASSWORD 'password_a' SUPERUSER;
+    CREATE ROLE alice WITH PASSWORD = 'password_a' AND LOGIN = true AND 
SUPERUSER = true;
+
+    CREATE USER IF NOT EXISTS alice WITH PASSWORD 'password_a' SUPERUSER;
+    CREATE ROLE IF NOT EXISTS alice WITH PASSWORD = 'password_a' AND LOGIN = true AND SUPERUSER = true;
+
+    CREATE USER alice WITH PASSWORD 'password_a' NOSUPERUSER;
+    CREATE ROLE alice WITH PASSWORD = 'password_a' AND LOGIN = true AND 
SUPERUSER = false;
+
+    CREATE USER alice WITH PASSWORD 'password_a' NOSUPERUSER;
+    CREATE ROLE alice WITH PASSWORD = 'password_a' AND LOGIN = true;
+
+    CREATE USER alice WITH PASSWORD 'password_a';
+    CREATE ROLE alice WITH PASSWORD = 'password_a' AND LOGIN = true;
+
+.. _alter-user-statement:
+
+ALTER USER
+~~~~~~~~~~
+
+Altering the options of a user uses the ``ALTER USER`` statement:
+
+.. productionlist::
+   alter_user_statement: ALTER USER `role_name` [ WITH PASSWORD `string` ] [ 
`user_option` ]
+
+For instance::
+
+    ALTER USER alice WITH PASSWORD 'PASSWORD_A';
+    ALTER USER bob SUPERUSER;
+
+.. _drop-user-statement:
+
+DROP USER
+~~~~~~~~~
+
+Dropping a user uses the ``DROP USER`` statement:
+
+.. productionlist::
+   drop_user_statement: DROP USER [ IF EXISTS ] `role_name`
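+
+For instance, reusing the users created above::
+
+    DROP USER alice;
+    DROP USER IF EXISTS bob;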
+
+.. _list-users-statement:
+
+LIST USERS
+~~~~~~~~~~
+
+Existing users can be listed using the ``LIST USERS`` statement:
+
+.. productionlist::
+   list_users_statement: LIST USERS
+
+Note that this statement is equivalent to::
+
+    LIST ROLES;
+
+but only roles with the ``LOGIN`` privilege are included in the output.
+
+Data Control
+^^^^^^^^^^^^
+
+.. _cql-permissions:
+
+Permissions
+~~~~~~~~~~~
+
+Permissions on resources are granted to roles; there are several different 
types of resources in Cassandra and each type
+is modelled hierarchically:
+
+- The hierarchy of Data resources, Keyspaces and Tables has the structure 
``ALL KEYSPACES`` -> ``KEYSPACE`` ->
+  ``TABLE``.
+- Function resources have the structure ``ALL FUNCTIONS`` -> ``KEYSPACE`` -> 
``FUNCTION``
+- Resources representing roles have the structure ``ALL ROLES`` -> ``ROLE``
+- Resources representing JMX ObjectNames, which map to sets of MBeans/MXBeans, 
have the structure ``ALL MBEANS`` ->
+  ``MBEAN``
+
+Permissions can be granted at any level of these hierarchies and they flow downwards. So granting a permission on a
+resource higher up the chain automatically grants that same permission on all resources lower down. For example,
+granting ``SELECT`` on a ``KEYSPACE`` automatically grants it on all ``TABLES`` in that ``KEYSPACE``. Likewise,
+granting a permission on ``ALL FUNCTIONS`` grants it on every defined function, regardless of which keyspace it is
+scoped in. It is also possible to grant permissions on all functions scoped to a particular keyspace.
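+
+For example, a single grant at the keyspace level covers every table in that keyspace (a sketch using the
+``GRANT PERMISSION`` syntax described below; ``keyspace1``, ``table1`` and ``data_reader`` are illustrative names)::
+
+    -- One grant on the keyspace...
+    GRANT SELECT ON KEYSPACE keyspace1 TO data_reader;
+
+    -- ...permits SELECT on every table in it, with no per-table grant needed:
+    SELECT * FROM keyspace1.table1;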
+
+Modifications to permissions are visible to existing client sessions; that is, connections need not be re-established
+following permissions changes.
+
+The full set of available permissions is:
+
+- ``CREATE``
+- ``ALTER``
+- ``DROP``
+- ``SELECT``
+- ``MODIFY``
+- ``AUTHORIZE``
+- ``DESCRIBE``
+- ``EXECUTE``
+
+Not all permissions are applicable to every type of resource. For instance, ``EXECUTE`` is only relevant in the
+context of functions or mbeans; granting ``EXECUTE`` on a resource representing a table is nonsensical. Attempting to
+``GRANT`` a permission on a resource to which it cannot be applied results in an error response. The following
+illustrates which permissions can be granted on which types of resource, and which statements are enabled by that
+permission.
+
+=============== =============================== =======================================================================
+ Permission      Resource                        Operations
+=============== =============================== =======================================================================
+ ``CREATE``      ``ALL KEYSPACES``               ``CREATE KEYSPACE`` and ``CREATE TABLE`` in any keyspace
+ ``CREATE``      ``KEYSPACE``                    ``CREATE TABLE`` in specified keyspace
+ ``CREATE``      ``ALL FUNCTIONS``               ``CREATE FUNCTION`` in any keyspace and ``CREATE AGGREGATE`` in any
+                                                 keyspace
+ ``CREATE``      ``ALL FUNCTIONS IN KEYSPACE``   ``CREATE FUNCTION`` and ``CREATE AGGREGATE`` in specified keyspace
+ ``CREATE``      ``ALL ROLES``                   ``CREATE ROLE``
+ ``ALTER``       ``ALL KEYSPACES``               ``ALTER KEYSPACE`` and ``ALTER TABLE`` in any keyspace
+ ``ALTER``       ``KEYSPACE``                    ``ALTER KEYSPACE`` and ``ALTER TABLE`` in specified keyspace
+ ``ALTER``       ``TABLE``                       ``ALTER TABLE``
+ ``ALTER``       ``ALL FUNCTIONS``               ``CREATE FUNCTION`` and ``CREATE AGGREGATE``: replacing any existing
+ ``ALTER``       ``ALL FUNCTIONS IN KEYSPACE``   ``CREATE FUNCTION`` and ``CREATE AGGREGATE``: replacing existing in
+                                                 specified keyspace
+ ``ALTER``       ``FUNCTION``                    ``CREATE FUNCTION`` and ``CREATE AGGREGATE``: replacing existing
+ ``ALTER``       ``ALL ROLES``                   ``ALTER ROLE`` on any role
+ ``ALTER``       ``ROLE``                        ``ALTER ROLE``
+ ``DROP``        ``ALL KEYSPACES``               ``DROP KEYSPACE`` and ``DROP TABLE`` in any keyspace
+ ``DROP``        ``KEYSPACE``                    ``DROP TABLE`` in specified keyspace
+ ``DROP``        ``TABLE``                       ``DROP TABLE``
+ ``DROP``        ``ALL FUNCTIONS``               ``DROP FUNCTION`` and ``DROP AGGREGATE`` in any keyspace
+ ``DROP``        ``ALL FUNCTIONS IN KEYSPACE``   ``DROP FUNCTION`` and ``DROP AGGREGATE`` in specified keyspace
+ ``DROP``        ``FUNCTION``                    ``DROP FUNCTION``
+ ``DROP``        ``ALL ROLES``                   ``DROP ROLE`` on any role
+ ``DROP``        ``ROLE``                        ``DROP ROLE``
+ ``SELECT``      ``ALL KEYSPACES``               ``SELECT`` on any table
+ ``SELECT``      ``KEYSPACE``                    ``SELECT`` on any table in specified keyspace
+ ``SELECT``      ``TABLE``                       ``SELECT`` on specified table
+ ``SELECT``      ``ALL MBEANS``                  Call getter methods on any mbean
+ ``SELECT``      ``MBEANS``                      Call getter methods on any mbean matching a wildcard pattern
+ ``SELECT``      ``MBEAN``                       Call getter methods on named mbean
+ ``MODIFY``      ``ALL KEYSPACES``               ``INSERT``, ``UPDATE``, ``DELETE`` and ``TRUNCATE`` on any table
+ ``MODIFY``      ``KEYSPACE``                    ``INSERT``, ``UPDATE``, ``DELETE`` and ``TRUNCATE`` on any table in
+                                                 specified keyspace
+ ``MODIFY``      ``TABLE``                       ``INSERT``, ``UPDATE``, ``DELETE`` and ``TRUNCATE`` on specified table
+ ``MODIFY``      ``ALL MBEANS``                  Call setter methods on any mbean
+ ``MODIFY``      ``MBEANS``                      Call setter methods on any mbean matching a wildcard pattern
+ ``MODIFY``      ``MBEAN``                       Call setter methods on named mbean
+ ``AUTHORIZE``   ``ALL KEYSPACES``               ``GRANT PERMISSION`` and ``REVOKE PERMISSION`` on any table
+ ``AUTHORIZE``   ``KEYSPACE``                    ``GRANT PERMISSION`` and ``REVOKE PERMISSION`` on any table in
+                                                 specified keyspace
+ ``AUTHORIZE``   ``TABLE``                       ``GRANT PERMISSION`` and ``REVOKE PERMISSION`` on specified table
+ ``AUTHORIZE``   ``ALL FUNCTIONS``               ``GRANT PERMISSION`` and ``REVOKE PERMISSION`` on any function
+ ``AUTHORIZE``   ``ALL FUNCTIONS IN KEYSPACE``   ``GRANT PERMISSION`` and ``REVOKE PERMISSION`` in specified keyspace
+ ``AUTHORIZE``   ``FUNCTION``                    ``GRANT PERMISSION`` and ``REVOKE PERMISSION`` on specified function
+ ``AUTHORIZE``   ``ALL MBEANS``                  ``GRANT PERMISSION`` and ``REVOKE PERMISSION`` on any mbean
+ ``AUTHORIZE``   ``MBEANS``                      ``GRANT PERMISSION`` and ``REVOKE PERMISSION`` on any mbean matching
+                                                 a wildcard pattern
+ ``AUTHORIZE``   ``MBEAN``                       ``GRANT PERMISSION`` and ``REVOKE PERMISSION`` on named mbean
+ ``AUTHORIZE``   ``ALL ROLES``                   ``GRANT ROLE`` and ``REVOKE ROLE`` on any role
+ ``AUTHORIZE``   ``ROLES``                       ``GRANT ROLE`` and ``REVOKE ROLE`` on specified roles
+ ``DESCRIBE``    ``ALL ROLES``                   ``LIST ROLES`` on all roles or only roles granted to another,
+                                                 specified role
+ ``DESCRIBE``    ``ALL MBEANS``                  Retrieve metadata about any mbean from the platform's MBeanServer
+ ``DESCRIBE``    ``MBEANS``                      Retrieve metadata about any mbean matching a wildcard pattern from
+                                                 the platform's MBeanServer
+ ``DESCRIBE``    ``MBEAN``                       Retrieve metadata about a named mbean from the platform's MBeanServer
+ ``EXECUTE``     ``ALL FUNCTIONS``               ``SELECT``, ``INSERT`` and ``UPDATE`` using any function, and use of
+                                                 any function in ``CREATE AGGREGATE``
+ ``EXECUTE``     ``ALL FUNCTIONS IN KEYSPACE``   ``SELECT``, ``INSERT`` and ``UPDATE`` using any function in specified
+                                                 keyspace and use of any function in keyspace in ``CREATE AGGREGATE``
+ ``EXECUTE``     ``FUNCTION``                    ``SELECT``, ``INSERT`` and ``UPDATE`` using specified function and use
+                                                 of the function in ``CREATE AGGREGATE``
+ ``EXECUTE``     ``ALL MBEANS``                  Execute operations on any mbean
+ ``EXECUTE``     ``MBEANS``                      Execute operations on any mbean matching a wildcard pattern
+ ``EXECUTE``     ``MBEAN``                       Execute operations on named mbean
+=============== =============================== =======================================================================
+
+.. _grant-permission-statement:
+
+GRANT PERMISSION
+~~~~~~~~~~~~~~~~
+
+Granting a permission uses the ``GRANT PERMISSION`` statement:
+
+.. productionlist::
+   grant_permission_statement: GRANT `permissions` ON `resource` TO `role_name`
+   permissions: ALL [ PERMISSIONS ] | `permission` [ PERMISSION ]
+   permission: CREATE | ALTER | DROP | SELECT | MODIFY | AUTHORIZE | DESCRIBE | EXECUTE
+   resource: ALL KEYSPACES
+           :| KEYSPACE `keyspace_name`
+           :| [ TABLE ] `table_name`
+           :| ALL ROLES
+           :| ROLE `role_name`
+           :| ALL FUNCTIONS [ IN KEYSPACE `keyspace_name` ]
+           :| FUNCTION `function_name` '(' [ `cql_type` ( ',' `cql_type` )* ] ')'
+           :| ALL MBEANS
+           :| ( MBEAN | MBEANS ) `string`
+
+For instance::
+
+    GRANT SELECT ON ALL KEYSPACES TO data_reader;
+
+This gives any user with the role ``data_reader`` permission to execute ``SELECT`` statements on any table across all
+keyspaces::
+
+    GRANT MODIFY ON KEYSPACE keyspace1 TO data_writer;
+
+This gives any user with the role ``data_writer`` permission to perform ``INSERT``, ``UPDATE``, ``DELETE`` and
+``TRUNCATE`` queries on all tables in the ``keyspace1`` keyspace::
+
+    GRANT DROP ON keyspace1.table1 TO schema_owner;
+
+This gives any user with the ``schema_owner`` role permissions to ``DROP`` ``keyspace1.table1``::
+
+    GRANT EXECUTE ON FUNCTION keyspace1.user_function( int ) TO report_writer;
+
+This grants any user with the ``report_writer`` role permission to execute ``SELECT``, ``INSERT`` and ``UPDATE``
+queries which use the function ``keyspace1.user_function( int )``::
+
+    GRANT DESCRIBE ON ALL ROLES TO role_admin;
+
+This grants any user with the ``role_admin`` role permission to view any and all roles in the system with a
+``LIST ROLES`` statement.
+
+.. _grant-all:
+
+GRANT ALL
+`````````
+
+When the ``GRANT ALL`` form is used, the appropriate set of permissions is determined automatically based on the
+target resource.
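+
+For instance, the following sketch (reusing the illustrative ``keyspace1`` and ``schema_owner`` names from the
+earlier examples) grants every permission applicable to a keyspace, as listed in the table above::
+
+    GRANT ALL PERMISSIONS ON KEYSPACE keyspace1 TO schema_owner;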
+
+Automatic Granting
+``````````````````
+
+When a resource is created, via a ``CREATE KEYSPACE``, ``CREATE TABLE``, ``CREATE FUNCTION``, ``CREATE AGGREGATE`` or
+``CREATE ROLE`` statement, the creator (the role identified by the database user who issues the statement) is
+automatically granted all applicable permissions on the new resource.
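+
+For example, if a database user logged in as the illustrative ``schema_owner`` role issues::
+
+    CREATE TABLE keyspace1.table2 (pk int PRIMARY KEY, val text);
+
+then ``schema_owner`` is automatically granted the permissions applicable to a table (``ALTER``, ``DROP``,
+``SELECT``, ``MODIFY`` and ``AUTHORIZE``, per the table above) on the new ``keyspace1.table2``, which can be checked
+with::
+
+    LIST ALL PERMISSIONS ON keyspace1.table2 OF schema_owner;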
+
+.. _revoke-permission-statement:
+
+REVOKE PERMISSION
+~~~~~~~~~~~~~~~~~
+
+Revoking a permission from a role uses the ``REVOKE PERMISSION`` statement:
+
+.. productionlist::
+   revoke_permission_statement: REVOKE `permissions` ON `resource` FROM `role_name`
+
+For instance::
+
+    REVOKE SELECT ON ALL KEYSPACES FROM data_reader;
+    REVOKE MODIFY ON KEYSPACE keyspace1 FROM data_writer;
+    REVOKE DROP ON keyspace1.table1 FROM schema_owner;
+    REVOKE EXECUTE ON FUNCTION keyspace1.user_function( int ) FROM report_writer;
+    REVOKE DESCRIBE ON ALL ROLES FROM role_admin;
+
+.. _list-permissions-statement:
+
+LIST PERMISSIONS
+~~~~~~~~~~~~~~~~
+
+Listing granted permissions uses the ``LIST PERMISSIONS`` statement:
+
+.. productionlist::
+   list_permissions_statement: LIST `permissions` [ ON `resource` ] [ OF `role_name` [ NORECURSIVE ] ]
+
+For instance::
+
+    LIST ALL PERMISSIONS OF alice;
+
+Show all permissions granted to ``alice``, including those acquired transitively from any other roles::
+
+    LIST ALL PERMISSIONS ON keyspace1.table1 OF bob;
+
+Show all permissions on ``keyspace1.table1`` granted to ``bob``, including those acquired transitively from any other
+roles. This also includes any permissions higher up the resource hierarchy which can be applied to
+``keyspace1.table1``. For example, should ``bob`` have ``ALTER`` permission on ``keyspace1``, that would be included
+in the results of this query. Adding the ``NORECURSIVE`` switch restricts the results to only those permissions which
+were directly granted to ``bob`` or one of ``bob``'s roles::
+
+    LIST SELECT PERMISSIONS OF carlos;
+
+Show any permissions granted to ``carlos`` or any of ``carlos``'s roles, limited to ``SELECT`` permissions on any
+resource.
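+
+The ``NORECURSIVE`` switch mentioned above can be added to any of these queries; for instance, to restrict the
+``bob`` query to permissions granted directly to ``bob`` or one of ``bob``'s roles::
+
+    LIST ALL PERMISSIONS ON keyspace1.table1 OF bob NORECURSIVE;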

Added: cassandra/site/src/doc/3.10/_sources/cql/triggers.txt
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/doc/3.10/_sources/cql/triggers.txt?rev=1757419&view=auto
==============================================================================
--- cassandra/site/src/doc/3.10/_sources/cql/triggers.txt (added)
+++ cassandra/site/src/doc/3.10/_sources/cql/triggers.txt Tue Aug 23 19:25:17 
2016
@@ -0,0 +1,63 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..     http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS,
+.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+.. See the License for the specific language governing permissions and
+.. limitations under the License.
+
+.. highlight:: cql
+
+.. _cql-triggers:
+
+Triggers
+--------
+
+Triggers are identified by a name defined by:
+
+.. productionlist::
+   trigger_name: `identifier`
+
+
+.. _create-trigger-statement:
+
+CREATE TRIGGER
+^^^^^^^^^^^^^^
+
+Creating a new trigger uses the ``CREATE TRIGGER`` statement:
+
+.. productionlist::
+   create_trigger_statement: CREATE TRIGGER [ IF NOT EXISTS ] `trigger_name`
+                           :     ON `table_name`
+                           :     USING `string`
+
+For instance::
+
+    CREATE TRIGGER myTrigger ON myTable USING 'org.apache.cassandra.triggers.InvertedIndex';
+
+The actual logic that makes up the trigger can be written in any Java (JVM) language and exists outside the database.
+Place the trigger code in a ``lib/triggers`` subdirectory of the Cassandra installation directory; it is loaded
+during cluster startup and must exist on every node that participates in the cluster. The trigger defined on a table
+fires before a requested DML statement occurs, which ensures the atomicity of the transaction.
+
+.. _drop-trigger-statement:
+
+DROP TRIGGER
+^^^^^^^^^^^^
+
+Dropping a trigger uses the ``DROP TRIGGER`` statement:
+
+.. productionlist::
+   drop_trigger_statement: DROP TRIGGER [ IF EXISTS ] `trigger_name` ON `table_name`
+
+For instance::
+
+    DROP TRIGGER myTrigger ON myTable;

