Author: slebresne
Date: Mon Mar 10 17:03:32 2014
New Revision: 1576002

URL: http://svn.apache.org/r1576002
Log:
Update CQL doc

Modified:
    cassandra/site/publish/doc/cql3/CQL-1.2.html
    cassandra/site/publish/doc/cql3/CQL-2.0.html

Modified: cassandra/site/publish/doc/cql3/CQL-1.2.html
URL: http://svn.apache.org/viewvc/cassandra/site/publish/doc/cql3/CQL-1.2.html?rev=1576002&r1=1576001&r2=1576002&view=diff
==============================================================================
--- cassandra/site/publish/doc/cql3/CQL-1.2.html (original)
+++ cassandra/site/publish/doc/cql3/CQL-1.2.html Mon Mar 10 17:03:32 2014
@@ -59,9 +59,6 @@ CREATE KEYSPACE Excalibur
 <column-definition> ::= <identifier> <type> ( PRIMARY KEY )?
                      | PRIMARY KEY '(' <partition-key> ( ',' <identifier> )* ')'
 
-<partition-key> ::= <partition-key>
-                  | '(' <partition-key> ( ',' <identifier> )* ')'
-
 <partition-key> ::= <identifier>
                   | '(' <identifier> (',' <identifier> )* ')'
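
As an illustration of the <partition-key> grammar above, a minimal CQL sketch (the table and column names are hypothetical, not taken from the documentation being changed):

    -- (machine, cpu) together form a composite partition key thanks to the
    -- extra set of parentheses; mtime is a clustering column.
    CREATE TABLE monitoring (
        machine text,
        cpu int,
        mtime timestamp,
        value double,
        PRIMARY KEY ((machine, cpu), mtime)
    );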
 
@@ -93,7 +90,7 @@ CREATE TABLE timeline (
     other text,
     PRIMARY KEY (k)
 )
-</pre></pre><p>Moreover, a table must define at least one column that is not part of the PRIMARY KEY, since a row exists in Cassandra only if it contains at least one value for such a column.</p>
<h4 id="createTablepartitionClustering">Partition key and clustering columns</h4>
<p>In CQL, the order in which columns are defined for the <code>PRIMARY KEY</code> matters. The first column of the key is called the <i>partition key</i>. It has the property that all rows sharing the same partition key (even across tables in fact) are stored on the same physical node. Also, insertions/updates/deletions on rows sharing the same partition key for a given table are performed <i>atomically</i> and in <i>isolation</i>. Note that it is possible to have a composite partition key, i.e. a partition key formed of multiple columns, using an extra set of parentheses to define which columns form the partition key.</p>
<p>The remaining columns of the <code>PRIMARY KEY</code> definition, if any, are called <i>clustering columns</i>. On a given physical node, rows for a given partition key are stored in the order induced by the clustering columns, making the retrieval of rows in that clustering order particularly efficient (see <a href="#selectStmt"><tt>SELECT</tt></a>).</p>
<h4 id="createTableOptions"><code>&lt;option></code></h4>
<p>The <code>CREATE TABLE</code> statement supports a number of options that control the configuration of a new table. These options can be specified after the <code>WITH</code> keyword.</p>
<p>The first of these options is <code>COMPACT STORAGE</code>. This option is mainly targeted towards backward compatibility for definitions created before CQL3 (see <a href="http://www.datastax.com/dev/blog/thrift-to-cql3">www.datastax.com/dev/blog/thrift-to-cql3</a> for more details). The option also provides a slightly more compact layout of data on disk, but at the price of diminished flexibility and extensibility for the table. Most notably, <code>COMPACT STORAGE</code> tables cannot have collections, and a <code>COMPACT STORAGE</code> table with at least one clustering column supports exactly one (as in not 0 nor more than 1) column that is not part of the <code>PRIMARY KEY</code> definition (which implies in particular that you cannot add or remove columns after creation). For those reasons, <code>COMPACT STORAGE</code> is not recommended outside of the backward compatibility reason mentioned above.</p>
<p>Another option is <code>CLUSTERING ORDER</code>. It allows defining the ordering of rows on disk. It takes the list of the clustering column names with, for each of them, the on-disk order (ascending or descending). Note that this option affects <a href="#selectOrderBy">which <code>ORDER BY</code> clauses are allowed during <code>SELECT</code></a>.</p>
<p>Table creation supports the following other <code>&lt;property></code>:</p>
<table>
<tr><th>option</th><th>kind</th><th>default</th><th>description</th></tr>
<tr><td><code>comment</code></td><td><em>simple</em></td><td>none</td><td>A free-form, human-readable comment.</td></tr>
<tr><td><code>read_repair_chance</code></td><td><em>simple</em></td><td>0.1</td><td>The probability with which to query extra nodes (i.e. more nodes than required by the consistency level) for the purpose of read repairs.</td></tr>
<tr><td><code>dclocal_read_repair_chance</code></td><td><em>simple</em></td><td>0</td><td>The probability with which to query extra nodes (i.e. more nodes than required by the consistency level) belonging to the same data center as the read coordinator for the purpose of read repairs.</td></tr>
<tr><td><code>gc_grace_seconds</code></td><td><em>simple</em></td><td>864000</td><td>Time to wait before garbage collecting tombstones (deletion markers).</td></tr>
<tr><td><code>bloom_filter_fp_chance</code></td><td><em>simple</em></td><td>0.00075</td><td>The target probability of false positives for the sstable bloom filters. The bloom filters will be sized to provide the specified probability (thus lowering this value impacts the size of bloom filters in memory and on disk).</td></tr>
<tr><td><code>compaction</code></td><td><em>map</em></td><td><em>see below</em></td><td>The compaction options to use, see below.</td></tr>
<tr><td><code>compression</code></td><td><em>map</em></td><td><em>see below</em></td><td>Compression options, see below.</td></tr>
<tr><td><code>replicate_on_write</code></td><td><em>simple</em></td><td>true</td><td>Whether to replicate data on write. This can only be set to false for tables with counter values. Disabling this is dangerous and can result in random loss of counters; don&#8217;t disable it unless you are sure you know what you are doing.</td></tr>
<tr><td><code>caching</code></td><td><em>simple</em></td><td>keys_only</td><td>Whether to cache keys (&#8220;key cache&#8221;) and/or rows (&#8220;row cache&#8221;) for this table. Valid values are: <code>all</code>, <code>keys_only</code>, <code>rows_only</code> and <code>none</code>.</td></tr>
</table>
<h4 id="compactionOptions"><code>compaction</code> options</h4>
<p>The <code>compaction</code> property must at least define the <code>'class'</code> sub-option, which defines the compaction strategy class to use. The default supported classes are <code>'SizeTieredCompactionStrategy'</code> and <code>'LeveledCompactionStrategy'</code>. A custom strategy can be provided by specifying the full class name as a <a href="#constants">string constant</a>. The remaining sub-options depend on the chosen class. The sub-options supported by the default classes are:</p>
<table>
<tr><th>option</th><th>supported compaction strategy</th><th>default</th><th>description</th></tr>
<tr><td><code>tombstone_threshold</code></td><td><em>all</em></td><td>0.2</td><td>A ratio such that if an sstable has more than this ratio of gcable tombstones over all contained columns, the sstable will be compacted (with no other sstables) for the purpose of purging those tombstones.</td></tr>
<tr><td><code>tombstone_compaction_interval</code></td><td><em>all</em></td><td>1 day</td><td>The minimum time to wait after an sstable's creation time before considering it for &#8220;tombstone compaction&#8221;, where &#8220;tombstone compaction&#8221; is the compaction triggered if the sstable has more gcable tombstones than <code>tombstone_threshold</code>.</td></tr>
<tr><td><code>min_sstable_size</code></td><td>SizeTieredCompactionStrategy</td><td>50MB</td><td>The size tiered strategy groups SSTables to compact in buckets. A bucket groups SSTables that differ by less than 50% in size. However, for small sizes, this would result in a bucketing that is too fine grained. <code>min_sstable_size</code> defines a size threshold (in bytes) below which all SSTables belong to one unique bucket.</td></tr>
<tr><td><code>min_threshold</code></td><td>SizeTieredCompactionStrategy</td><td>4</td><td>Minimum number of SSTables needed to start a minor compaction.</td></tr>
<tr><td><code>max_threshold</code></td><td>SizeTieredCompactionStrategy</td><td>32</td><td>Maximum number of SSTables processed by one minor compaction.</td></tr>
<tr><td><code>bucket_low</code></td><td>SizeTieredCompactionStrategy</td><td>0.5</td><td>The size tiered strategy considers sstables to be within the same bucket if their size is within [average_size * <code>bucket_low</code>, average_size * <code>bucket_high</code>] (i.e. the defaults group sstables whose sizes diverge by at most 50%).</td></tr>
<tr><td><code>bucket_high</code></td><td>SizeTieredCompactionStrategy</td><td>1.5</td><td>The size tiered strategy considers sstables to be within the same bucket if their size is within [average_size * <code>bucket_low</code>, average_size * <code>bucket_high</code>] (i.e. the defaults group sstables whose sizes diverge by at most 50%).</td></tr>
<tr><td><code>sstable_size_in_mb</code></td><td>LeveledCompactionStrategy</td><td>5MB</td><td>The target size (in MB) for sstables in the leveled strategy. Note that while sstable sizes should stay less than or equal to <code>sstable_size_in_mb</code>, it is possible to exceptionally have a larger sstable, as data for a given partition key is never split into 2 sstables during compaction.</td></tr>
</table>
<p>For the <code>compression</code> property, the following default sub-options are available:</p>
<table>
<tr><th>option</th><th>default</th><th>description</th></tr>
<tr><td><code>sstable_compression</code></td><td>SnappyCompressor</td><td>The compression algorithm to use. The default compressors are SnappyCompressor and DeflateCompressor. Use an empty string (<code>''</code>) to disable compression. A custom compressor can be provided by specifying the full class name as a <a href="#constants">string constant</a>.</td></tr>
<tr><td><code>chunk_length_kb</code></td><td>64KB</td><td>On disk, SSTables are compressed by block (to allow random reads). This defines the size (in KB) of said block. Bigger values may improve the compression rate but increase the minimum amount of data that has to be read from disk for a read.</td></tr>
<tr><td><code>crc_check_chance</code></td><td>1.0</td><td>When compression is enabled, each compressed block includes a checksum of that block for the purpose of detecting disk bitrot and avoiding the propagation of corruption to other replicas. This option defines the probability with which those checksums are checked during read. By default they are always checked. Set to 0 to disable checksum checking, or to 0.5, for instance, to check them on every other read.</td></tr>
</table>
<h4 id="Otherconsiderations">Other considerations:</h4>
<ul><li>When <a href="#insertStmt">inserting</a> or <a href="#updateStmt">updating</a> a given row, not all columns need to be defined (except for those that are part of the key), and missing columns occupy no space on disk. Furthermore, adding new columns (see <a href="#alterStmt"><tt>ALTER TABLE</tt></a>) is a constant time operation. There is thus no need to try to anticipate future usage (or to cry when you haven&#8217;t) when creating a table.</li></ul>
<h3 id="alterTableStmt">ALTER TABLE</h3>
<p><i>Syntax:</i></p>
<pre class="syntax"><pre>&lt;alter-table-stmt> ::= ALTER (TABLE | COLUMNFAMILY) &lt;tablename> &lt;instruction>
+</pre></pre><h4 id="createTablepartitionClustering">Partition key and clustering columns</h4>
<p>In CQL, the order in which columns are defined for the <code>PRIMARY KEY</code> matters. The first column of the key is called the <i>partition key</i>. It has the property that all rows sharing the same partition key (even across tables in fact) are stored on the same physical node. Also, insertions/updates/deletions on rows sharing the same partition key for a given table are performed <i>atomically</i> and in <i>isolation</i>. Note that it is possible to have a composite partition key, i.e. a partition key formed of multiple columns, using an extra set of parentheses to define which columns form the partition key.</p>
<p>The remaining columns of the <code>PRIMARY KEY</code> definition, if any, are called <i>clustering columns</i>. On a given physical node, rows for a given partition key are stored in the order induced by the clustering columns, making the retrieval of rows in that clustering order particularly efficient (see <a href="#selectStmt"><tt>SELECT</tt></a>).</p>
<h4 id="createTableOptions"><code>&lt;option></code></h4>
<p>The <code>CREATE TABLE</code> statement supports a number of options that control the configuration of a new table. These options can be specified after the <code>WITH</code> keyword.</p>
<p>The first of these options is <code>COMPACT STORAGE</code>. This option is mainly targeted towards backward compatibility for definitions created before CQL3 (see <a href="http://www.datastax.com/dev/blog/thrift-to-cql3">www.datastax.com/dev/blog/thrift-to-cql3</a> for more details). The option also provides a slightly more compact layout of data on disk, but at the price of diminished flexibility and extensibility for the table. Most notably, <code>COMPACT STORAGE</code> tables cannot have collections, and a <code>COMPACT STORAGE</code> table with at least one clustering column supports exactly one (as in not 0 nor more than 1) column that is not part of the <code>PRIMARY KEY</code> definition (which implies in particular that you cannot add or remove columns after creation). For those reasons, <code>COMPACT STORAGE</code> is not recommended outside of the backward compatibility reason mentioned above.</p>
<p>Another option is <code>CLUSTERING ORDER</code>. It allows defining the ordering of rows on disk. It takes the list of the clustering column names with, for each of them, the on-disk order (ascending or descending). Note that this option affects <a href="#selectOrderBy">which <code>ORDER BY</code> clauses are allowed during <code>SELECT</code></a>.</p>
<p>Table creation supports the following other <code>&lt;property></code>:</p>
<table>
<tr><th>option</th><th>kind</th><th>default</th><th>description</th></tr>
<tr><td><code>comment</code></td><td><em>simple</em></td><td>none</td><td>A free-form, human-readable comment.</td></tr>
<tr><td><code>read_repair_chance</code></td><td><em>simple</em></td><td>0.1</td><td>The probability with which to query extra nodes (i.e. more nodes than required by the consistency level) for the purpose of read repairs.</td></tr>
<tr><td><code>dclocal_read_repair_chance</code></td><td><em>simple</em></td><td>0</td><td>The probability with which to query extra nodes (i.e. more nodes than required by the consistency level) belonging to the same data center as the read coordinator for the purpose of read repairs.</td></tr>
<tr><td><code>gc_grace_seconds</code></td><td><em>simple</em></td><td>864000</td><td>Time to wait before garbage collecting tombstones (deletion markers).</td></tr>
<tr><td><code>bloom_filter_fp_chance</code></td><td><em>simple</em></td><td>0.00075</td><td>The target probability of false positives for the sstable bloom filters. The bloom filters will be sized to provide the specified probability (thus lowering this value impacts the size of bloom filters in memory and on disk).</td></tr>
<tr><td><code>compaction</code></td><td><em>map</em></td><td><em>see below</em></td><td>The compaction options to use, see below.</td></tr>
<tr><td><code>compression</code></td><td><em>map</em></td><td><em>see below</em></td><td>Compression options, see below.</td></tr>
<tr><td><code>replicate_on_write</code></td><td><em>simple</em></td><td>true</td><td>Whether to replicate data on write. This can only be set to false for tables with counter values. Disabling this is dangerous and can result in random loss of counters; don&#8217;t disable it unless you are sure you know what you are doing.</td></tr>
<tr><td><code>caching</code></td><td><em>simple</em></td><td>keys_only</td><td>Whether to cache keys (&#8220;key cache&#8221;) and/or rows (&#8220;row cache&#8221;) for this table. Valid values are: <code>all</code>, <code>keys_only</code>, <code>rows_only</code> and <code>none</code>.</td></tr>
</table>
<h4 id="compactionOptions"><code>compaction</code> options</h4>
<p>The <code>compaction</code> property must at least define the <code>'class'</code> sub-option, which defines the compaction strategy class to use. The default supported classes are <code>'SizeTieredCompactionStrategy'</code> and <code>'LeveledCompactionStrategy'</code>. A custom strategy can be provided by specifying the full class name as a <a href="#constants">string constant</a>. The remaining sub-options depend on the chosen class. The sub-options supported by the default classes are:</p>
<table>
<tr><th>option</th><th>supported compaction strategy</th><th>default</th><th>description</th></tr>
<tr><td><code>tombstone_threshold</code></td><td><em>all</em></td><td>0.2</td><td>A ratio such that if an sstable has more than this ratio of gcable tombstones over all contained columns, the sstable will be compacted (with no other sstables) for the purpose of purging those tombstones.</td></tr>
<tr><td><code>tombstone_compaction_interval</code></td><td><em>all</em></td><td>1 day</td><td>The minimum time to wait after an sstable's creation time before considering it for &#8220;tombstone compaction&#8221;, where &#8220;tombstone compaction&#8221; is the compaction triggered if the sstable has more gcable tombstones than <code>tombstone_threshold</code>.</td></tr>
<tr><td><code>min_sstable_size</code></td><td>SizeTieredCompactionStrategy</td><td>50MB</td><td>The size tiered strategy groups SSTables to compact in buckets. A bucket groups SSTables that differ by less than 50% in size. However, for small sizes, this would result in a bucketing that is too fine grained. <code>min_sstable_size</code> defines a size threshold (in bytes) below which all SSTables belong to one unique bucket.</td></tr>
<tr><td><code>min_threshold</code></td><td>SizeTieredCompactionStrategy</td><td>4</td><td>Minimum number of SSTables needed to start a minor compaction.</td></tr>
<tr><td><code>max_threshold</code></td><td>SizeTieredCompactionStrategy</td><td>32</td><td>Maximum number of SSTables processed by one minor compaction.</td></tr>
<tr><td><code>bucket_low</code></td><td>SizeTieredCompactionStrategy</td><td>0.5</td><td>The size tiered strategy considers sstables to be within the same bucket if their size is within [average_size * <code>bucket_low</code>, average_size * <code>bucket_high</code>] (i.e. the defaults group sstables whose sizes diverge by at most 50%).</td></tr>
<tr><td><code>bucket_high</code></td><td>SizeTieredCompactionStrategy</td><td>1.5</td><td>The size tiered strategy considers sstables to be within the same bucket if their size is within [average_size * <code>bucket_low</code>, average_size * <code>bucket_high</code>] (i.e. the defaults group sstables whose sizes diverge by at most 50%).</td></tr>
<tr><td><code>sstable_size_in_mb</code></td><td>LeveledCompactionStrategy</td><td>5MB</td><td>The target size (in MB) for sstables in the leveled strategy. Note that while sstable sizes should stay less than or equal to <code>sstable_size_in_mb</code>, it is possible to exceptionally have a larger sstable, as data for a given partition key is never split into 2 sstables during compaction.</td></tr>
</table>
<p>For the <code>compression</code> property, the following default sub-options are available:</p>
<table>
<tr><th>option</th><th>default</th><th>description</th></tr>
<tr><td><code>sstable_compression</code></td><td>SnappyCompressor</td><td>The compression algorithm to use. The default compressors are SnappyCompressor and DeflateCompressor. Use an empty string (<code>''</code>) to disable compression. A custom compressor can be provided by specifying the full class name as a <a href="#constants">string constant</a>.</td></tr>
<tr><td><code>chunk_length_kb</code></td><td>64KB</td><td>On disk, SSTables are compressed by block (to allow random reads). This defines the size (in KB) of said block. Bigger values may improve the compression rate but increase the minimum amount of data that has to be read from disk for a read.</td></tr>
<tr><td><code>crc_check_chance</code></td><td>1.0</td><td>When compression is enabled, each compressed block includes a checksum of that block for the purpose of detecting disk bitrot and avoiding the propagation of corruption to other replicas. This option defines the probability with which those checksums are checked during read. By default they are always checked. Set to 0 to disable checksum checking, or to 0.5, for instance, to check them on every other read.</td></tr>
</table>
<h4 id="Otherconsiderations">Other considerations:</h4>
<ul><li>When <a href="#insertStmt">inserting</a> or <a href="#updateStmt">updating</a> a given row, not all columns need to be defined (except for those that are part of the key), and missing columns occupy no space on disk. Furthermore, adding new columns (see <a href="#alterStmt"><tt>ALTER TABLE</tt></a>) is a constant time operation. There is thus no need to try to anticipate future usage (or to cry when you haven&#8217;t) when creating a table.</li></ul>
<h3 id="alterTableStmt">ALTER TABLE</h3>
<p><i>Syntax:</i></p>
<pre class="syntax"><pre>&lt;alter-table-stmt> ::= ALTER (TABLE | COLUMNFAMILY) &lt;tablename> &lt;instruction>
 
 &lt;instruction> ::= ALTER &lt;identifier> TYPE &lt;type>
                 | ADD   &lt;identifier> &lt;type>
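
For the WITH options discussed in the hunk above (CLUSTERING ORDER plus the map-valued compaction and compression properties), a hedged CQL sketch; the table, column names and chosen option values are hypothetical:

    -- CLUSTERING ORDER fixes the on-disk order of the clustering column;
    -- compaction and compression each take map-valued sub-options.
    CREATE TABLE events (
        id uuid,
        created timestamp,
        payload text,
        PRIMARY KEY (id, created)
    ) WITH CLUSTERING ORDER BY (created DESC)
       AND comment = 'event log'
       AND compaction = { 'class' : 'LeveledCompactionStrategy', 'sstable_size_in_mb' : 5 }
       AND compression = { 'sstable_compression' : 'DeflateCompressor', 'chunk_length_kb' : 64 };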
@@ -131,7 +128,7 @@ CREATE CUSTOM INDEX ON users (email) USI
 </pre></pre><p><br/><i>Sample:</i></p><pre class="sample"><pre>INSERT INTO NerdMovies (movie, director, main_actor, year)
                 VALUES ('Serenity', 'Joss Whedon', 'Nathan Fillion', 2005)
 USING TTL 86400;
-</pre></pre><p>The <code>INSERT</code> statement writes one or more columns for a given row in a table. Note that since a row is identified by its <code>PRIMARY KEY</code>, the columns that compose it must be specified. Also, since a row only exists when it contains one value for a column not part of the <code>PRIMARY KEY</code>, one such value must be specified too.</p>
<p>Note that unlike in SQL, <code>INSERT</code> does not check the prior existence of the row: the row is created if none existed before, and updated otherwise. Furthermore, there is no way to know whether the row was created or updated. In fact, the semantics of <code>INSERT</code> and <code>UPDATE</code> are identical.</p>
<p>All updates for an <code>INSERT</code> are applied atomically and in isolation.</p>
<p>Please refer to the <a href="#updateOptions"><code>UPDATE</code></a> section for information on the available <code>&lt;option></code>s and to the <a href="#collections">collections</a> section for the use of <code>&lt;collection-literal></code>. Also note that <code>INSERT</code> does not support counters, while <code>UPDATE</code> does.</p>
<h3 id="updateStmt">UPDATE</h3>
<p><i>Syntax:</i></p>
<pre class="syntax"><pre>&lt;update-stmt> ::= UPDATE &lt;tablename>
+</pre></pre><p>The <code>INSERT</code> statement writes one or more columns for a given row in a table. Note that since a row is identified by its <code>PRIMARY KEY</code>, at least the columns composing it must be specified.</p>
<p>Note that unlike in SQL, <code>INSERT</code> does not check the prior existence of the row: the row is created if none existed before, and updated otherwise. Furthermore, there is no way to know whether the row was created or updated. In fact, the semantics of <code>INSERT</code> and <code>UPDATE</code> are identical.</p>
<p>All updates for an <code>INSERT</code> are applied atomically and in isolation.</p>
<p>Please refer to the <a href="#updateOptions"><code>UPDATE</code></a> section for information on the available <code>&lt;option></code>s and to the <a href="#collections">collections</a> section for the use of <code>&lt;collection-literal></code>. Also note that <code>INSERT</code> does not support counters, while <code>UPDATE</code> does.</p>
<h3 id="updateStmt">UPDATE</h3>
<p><i>Syntax:</i></p>
<pre class="syntax"><pre>&lt;update-stmt> ::= UPDATE &lt;tablename>
                   ( USING &lt;option> ( AND &lt;option> )* )?
                   SET &lt;assignment> ( ',' &lt;assignment> )*
                   WHERE &lt;where-clause>
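
To illustrate the INSERT upsert semantics described in this hunk, a small sketch reusing the hypothetical events table from the earlier example (values and names are illustrative only):

    -- The full PRIMARY KEY (id and created) must be provided.
    INSERT INTO events (id, created, payload)
    VALUES (f47ac10b-58cc-4372-a567-0e02b2c3d479, '2014-03-10 17:03:32', 'first version')
    USING TTL 86400 AND TIMESTAMP 1394470800000000;

    -- Re-running the statement with a different payload silently updates the
    -- existing row; there is no way to tell a creation apart from an update.
    INSERT INTO events (id, created, payload)
    VALUES (f47ac10b-58cc-4372-a567-0e02b2c3d479, '2014-03-10 17:03:32', 'second version');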
@@ -155,7 +152,7 @@ SET director = 'Joss Whedon',
 WHERE movie = 'Serenity';
 
UPDATE UserActions SET total = total + 2 WHERE user = B70DE1D0-9908-4AE3-BE34-5573E5B09F14 AND action = 'click';
-</pre></pre><p><br/>The <code>UPDATE</code> statement writes one or more columns for a given row in a table. The <code>&lt;where-clause></code> is used to select the row to update and must include all columns composing the <code>PRIMARY KEY</code> (the <code>IN</code> relation is only supported for the last column of the partition key). Other column values are specified through <code>&lt;assignment></code> after the <code>SET</code> keyword.</p>
<p>Note that unlike in SQL, <code>UPDATE</code> does not check the prior existence of the row: the row is created if none existed before, and updated otherwise. Furthermore, there is no way to know whether the row was created or updated. In fact, the semantics of <code>INSERT</code> and <code>UPDATE</code> are identical.</p>
<p>In an <code>UPDATE</code> statement, all updates within the same partition key are applied atomically and in isolation.</p>
<p>The <code>c = c + 3</code> form of <code>&lt;assignment></code> is used to increment/decrement counters. The identifier after the &#8216;=&#8217; sign <strong>must</strong> be the same as the one before the &#8216;=&#8217; sign (only increment/decrement is supported on counters, not the assignment of a specific value).</p>
<p>The <code>id = id + &lt;collection-literal></code> and <code>id[value1] = value2</code> forms of <code>&lt;assignment></code> are for collections. Please refer to the <a href="#collections">relevant section</a> for more details.</p>
<h4 id="updateOptions"><code>&lt;options></code></h4>
<p>The <code>UPDATE</code> and <code>INSERT</code> statements allow specifying the following options for the insertion:</p>
<ul><li><code>TIMESTAMP</code>: sets the timestamp for the operation. If not specified, the current time of the insertion (in microseconds) is used. This is usually a suitable default.</li><li><code>TTL</code>: specifies an optional Time To Live (in seconds) for the inserted values. If set, the inserted values are automatically removed from the database after the specified time. Note that the TTL concerns the inserted values, not the columns themselves. This means that any subsequent update of the column will also reset the TTL (to whatever TTL is specified in that update). By default, values never expire.</li></ul>
<h3 id="deleteStmt">DELETE</h3>
<p><i>Syntax:</i></p>
<pre class="syntax"><pre>&lt;delete-stmt> ::= DELETE ( &lt;selection> ( ',' &lt;selection> )* )?
+</pre></pre><p><br/>The <code>UPDATE</code> statement writes one or more columns for a given row in a table. The <code>&lt;where-clause></code> is used to select the row to update and must include all columns composing the <code>PRIMARY KEY</code> (the <code>IN</code> relation is only supported for the last column of the partition key). Other column values are specified through <code>&lt;assignment></code> after the <code>SET</code> keyword.</p>
<p>Note that unlike in SQL, <code>UPDATE</code> does not check the prior existence of the row: the row is created if none existed before, and updated otherwise. Furthermore, there is no way to know whether the row was created or updated. In fact, the semantics of <code>INSERT</code> and <code>UPDATE</code> are identical.</p>
<p>In an <code>UPDATE</code> statement, all updates within the same partition key are applied atomically and in isolation.</p>
<p>The <code>c = c + 3</code> form of <code>&lt;assignment></code> is used to increment/decrement counters. The identifier after the &#8216;=&#8217; sign <strong>must</strong> be the same as the one before the &#8216;=&#8217; sign (only increment/decrement is supported on counters, not the assignment of a specific value).</p>
<p>The <code>id = id + &lt;collection-literal></code> and <code>id[value1] = value2</code> forms of <code>&lt;assignment></code> are for collections. Please refer to the <a href="#collections">relevant section</a> for more details.</p>
<h4 id="updateOptions"><code>&lt;options></code></h4>
<p>The <code>UPDATE</code> and <code>INSERT</code> statements allow specifying the following options for the insertion:</p>
<ul><li><code>TIMESTAMP</code>: sets the timestamp for the operation. If not specified, the current time of the insertion (in microseconds) is used. This is usually a suitable default.</li><li><code>TTL</code>: specifies an optional Time To Live (in seconds) for the inserted values. If set, the inserted values are automatically removed from the database after the specified time. Note that the TTL concerns the inserted values, not the columns themselves. This means that any subsequent update of the column will also reset the TTL (to whatever TTL is specified in that update). By default, values never expire. A TTL of 0 or a negative value is equivalent to no TTL.</li></ul>
<h3 id="deleteStmt">DELETE</h3>
<p><i>Syntax:</i></p>
<pre class="syntax"><pre>&lt;delete-stmt> ::= DELETE ( &lt;selection> ( ',' &lt;selection> )* )?
                   FROM &lt;tablename>
                   ( USING TIMESTAMP &lt;integer>)?
                   WHERE &lt;where-clause>
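
As a sketch of the counter assignment form and the DELETE grammar above (the page_views table is hypothetical and assumed to be a counter table with page text PRIMARY KEY and views counter; events is the hypothetical table from the earlier examples):

    -- Counters only support increment/decrement, never direct assignment.
    UPDATE page_views SET views = views + 1 WHERE page = '/index.html';

    -- Delete a single column of a row (full primary key required),
    -- then delete the whole partition by its partition key.
    DELETE payload FROM events
     WHERE id = f47ac10b-58cc-4372-a567-0e02b2c3d479 AND created = '2014-03-10 17:03:32';
    DELETE FROM events WHERE id = f47ac10b-58cc-4372-a567-0e02b2c3d479;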

Modified: cassandra/site/publish/doc/cql3/CQL-2.0.html
URL: http://svn.apache.org/viewvc/cassandra/site/publish/doc/cql3/CQL-2.0.html?rev=1576002&r1=1576001&r2=1576002&view=diff
==============================================================================
--- cassandra/site/publish/doc/cql3/CQL-2.0.html (original)
+++ cassandra/site/publish/doc/cql3/CQL-2.0.html Mon Mar 10 17:03:32 2014
@@ -60,9 +60,6 @@ CREATE KEYSPACE Excalibur
 &lt;column-definition> ::= &lt;identifier> &lt;type> ( STATIC )? ( PRIMARY KEY )?
                       | PRIMARY KEY '(' &lt;partition-key> ( ',' &lt;identifier> )* ')'
 
-&lt;partition-key> ::= &lt;partition-key>
-                  | '(' &lt;partition-key> ( ',' &lt;identifier> )* ')'
-
 &lt;partition-key> ::= &lt;identifier>
                   | '(' &lt;identifier> (',' &lt;identifier> )* ')'
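
The ( STATIC )? modifier in the <column-definition> grammar above can be sketched as follows; the table and column names are hypothetical, and a static column holds a single value shared by all rows of its partition:

    CREATE TABLE bills (
        user text,
        expense_id int,
        balance int STATIC,   -- one value per user (partition), shared by all its rows
        amount int,
        PRIMARY KEY (user, expense_id)
    );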
 

