Yingyi Bu has uploaded a new change for review.

  https://asterix-gerrit.ics.uci.edu/1295

Change subject: Documentation cleanup.
......................................................................

Documentation cleanup.

1. "record"->"object",
2. JSONify sqlpp/3_query.md.

Change-Id: Idcb2be81d1bfa37dd876cd36a7a5bb824bc3ab86
---
M asterixdb/asterix-doc/src/main/markdown/builtins/0_toc.md
M asterixdb/asterix-doc/src/main/markdown/builtins/11_type.md
M asterixdb/asterix-doc/src/main/markdown/builtins/12_misc.md
M asterixdb/asterix-doc/src/main/markdown/builtins/8_record.md
M asterixdb/asterix-doc/src/main/markdown/sqlpp/2_expr.md
M asterixdb/asterix-doc/src/main/markdown/sqlpp/3_query.md
M asterixdb/asterix-doc/src/main/markdown/sqlpp/5_ddl.md
M asterixdb/asterix-doc/src/site/markdown/aql/externaldata.md
M asterixdb/asterix-doc/src/site/markdown/aql/filters.md
M asterixdb/asterix-doc/src/site/markdown/aql/manual.md
M asterixdb/asterix-doc/src/site/markdown/aql/primer.md
M asterixdb/asterix-doc/src/site/markdown/aql/similarity.md
M asterixdb/asterix-doc/src/site/markdown/csv.md
M asterixdb/asterix-doc/src/site/markdown/datamodel.md
M asterixdb/asterix-doc/src/site/markdown/feeds/tutorial.md
M asterixdb/asterix-doc/src/site/markdown/sqlpp/primer-sqlpp.md
M asterixdb/asterix-doc/src/site/markdown/udf.md
17 files changed, 244 insertions(+), 245 deletions(-)


  git pull ssh://asterix-gerrit.ics.uci.edu:29418/asterixdb 
refs/changes/95/1295/1

diff --git a/asterixdb/asterix-doc/src/main/markdown/builtins/0_toc.md 
b/asterixdb/asterix-doc/src/main/markdown/builtins/0_toc.md
index 2cab02c..911ee63 100644
--- a/asterixdb/asterix-doc/src/main/markdown/builtins/0_toc.md
+++ b/asterixdb/asterix-doc/src/main/markdown/builtins/0_toc.md
@@ -28,7 +28,7 @@
 * [Similarity Functions](#SimilarityFunctions)
 * [Tokenizing Functions](#TokenizingFunctions)
 * [Temporal Functions](#TemporalFunctions)
-* [Record Functions](#RecordFunctions)
+* [Object Functions](#ObjectFunctions)
 * [Aggregate Functions (Array Functions)](#AggregateFunctions)
 * [Comparison Functions](#ComparisonFunctions)
 * [Type Functions](#TypeFunctions)
diff --git a/asterixdb/asterix-doc/src/main/markdown/builtins/11_type.md 
b/asterixdb/asterix-doc/src/main/markdown/builtins/11_type.md
index 7d355b2..7e8a7fe 100644
--- a/asterixdb/asterix-doc/src/main/markdown/builtins/11_type.md
+++ b/asterixdb/asterix-doc/src/main/markdown/builtins/11_type.md
@@ -129,11 +129,11 @@
 
         is_object(expr)
 
- * Checks whether the given expression is evaluated to be a `record` value.
+ * Checks whether the given expression is evaluated to be an `object` value.
  * Arguments:
     * `expr` : an expression (any type is allowed).
  * Return Value:
-    * a `boolean` on whether the argument is a `record` value or not,
+    * a `boolean` on whether the argument is an `object` value or not,
     * a `missing` if the argument is a `missing` value,
     * a `null` if the argument is a `null` value.
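
For illustration, a few calls and their expected results (a sketch; the
input literals here are assumptions, not taken from the reference itself):

        is_object({"a": 1})    -- true
        is_object([1, 2, 3])   -- false
        is_object(null)        -- null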
 
diff --git a/asterixdb/asterix-doc/src/main/markdown/builtins/12_misc.md 
b/asterixdb/asterix-doc/src/main/markdown/builtins/12_misc.md
index ee5ca31..b28443c 100644
--- a/asterixdb/asterix-doc/src/main/markdown/builtins/12_misc.md
+++ b/asterixdb/asterix-doc/src/main/markdown/builtins/12_misc.md
@@ -151,7 +151,7 @@
         deep_equal(expr1, expr2)
 
 
- * Assess the equality between two expressions of any type (e.g., record, 
arrays, or multiset).
+ * Assess the equality between two expressions of any type (e.g., objects, 
arrays, or multisets).
  Two objects are deeply equal iff both their types and values are equal.
  * Arguments:
     * `expr1` : an expression,
diff --git a/asterixdb/asterix-doc/src/main/markdown/builtins/8_record.md 
b/asterixdb/asterix-doc/src/main/markdown/builtins/8_record.md
index a110433..d7ec35b 100644
--- a/asterixdb/asterix-doc/src/main/markdown/builtins/8_record.md
+++ b/asterixdb/asterix-doc/src/main/markdown/builtins/8_record.md
@@ -17,27 +17,27 @@
  ! under the License.
  !-->
 
-## <a id="RecordFunctions">Record Functions</a> ##
+## <a id="ObjectFunctions">Object Functions</a> ##
 
-### get_record_fields ###
+### get_object_fields ###
  * Syntax:
 
-        get_record_fields(input_record)
+        get_object_fields(input_object)
 
- * Access the record field names, type and open status for a given record.
+ * Access the field names, types, and open status for a given object.
  * Arguments:
-    * `input_record` : a record value.
+    * `input_object` : an object value.
  * Return Value:
-    * an array of `record` values that include the field_name `string`,
+    * an array of `object` values that include the field_name `string`,
       field_type `string`, is_open `boolean` (used for debug purposes only: 
`true` if field is open and `false` otherwise),
-      and optional nested `orderedList` for the values of a nested record,
+      and optional nested `orderedList` for the values of a nested object,
     * `missing` if the argument is a `missing` value,
     * `null` if the argument is a `null` value,
-    * any other non-record input value will cause a type error.
+    * any other non-object input value will cause a type error.
 
  * Example:
 
-        get_record_fields(
+        get_object_fields(
                           {
                             "id": 1,
                             "project": "AsterixDB",
@@ -70,26 +70,26 @@
         ]
 
  ]
-### get_record_field_value ###
+### get_object_field_value ###
  * Syntax:
 
-        get_record_field_value(input_record, string)
+        get_object_field_value(input_object, string)
 
- * Access the field name given in the `string_expression` from the 
`record_expression`.
+ * Access the field of `input_object` whose name is given by `string`.
  * Arguments:
-    * `input_record` : a `record` value.
+    * `input_object` : an `object` value.
     * `string` : a `string` representing the top level field name.
  * Return Value:
-    * an `any` value saved in the designated field of the record,
+    * an `any` value saved in the designated field of the object,
     * `missing` if any argument is a `missing` value,
     * `null` if any argument is a `null` value but no argument is a `missing` 
value,
     * a type error will be raised if:
-        * the first argument is any other non-record value,
+        * the first argument is any other non-object value,
         * or, the second argument is any other non-string value.
 
  * Example:
 
-        get_record_field_value({
+        get_object_field_value({
                                  "id": 1,
                                  "project": "AsterixDB",
                                  "address": {"city": "Irvine", "state": "CA"},
@@ -102,28 +102,28 @@
 
         "AsterixDB"
 
-### record_remove_fields ###
+### object_remove_fields ###
  * Syntax:
 
-        record_remove_fields(input_record, field_names)
+        object_remove_fields(input_object, field_names)
 
- * Remove indicated fields from a record given a list of field names.
+ * Remove indicated fields from an object given a list of field names.
  * Arguments:
-    * `input_record`:  a record value.
+    * `input_object`: an object value.
     * `field_names`: an array of strings and/or array of array of strings.
 
  * Return Value:
-    * a new record value without the fields listed in the second argument,
+    * a new object value without the fields listed in the second argument,
     * `missing` if any argument is a `missing` value,
     * `null` if any argument is a `null` value but no argument is a `missing` 
value,
     * a type error will be raised if:
-        * the first argument is any other non-record value,
+        * the first argument is any other non-object value,
         * or, the second argument is any other non-array value or recursively 
contains non-string items.
 
 
  * Example:
 
-        record_remove_fields(
+        object_remove_fields(
                                {
                                  "id":1,
                                  "project":"AsterixDB",
@@ -141,27 +141,27 @@
           "address":{ "state": "CA" }
         }
 
-### record_add_fields ###
+### object_add_fields ###
  * Syntax:
 
-        record_add_fields(input_record, fields)
+        object_add_fields(input_object, fields)
 
- * Add fields to a record given a list of field names.
+ * Add fields to an object given a list of field names.
  * Arguments:
-    * `input_record` : a record value.
-    * `fields`: an array of field descriptor records where each record has 
field_name and  field_value.
+    * `input_object` : an object value.
+    * `fields`: an array of field descriptor objects where each object has 
field_name and field_value.
  * Return Value:
-    * a new record value with the new fields included,
+    * a new object value with the new fields included,
     * `missing` if any argument is a `missing` value,
     * `null` if any argument is a `null` value but no argument is a `missing` 
value,
     * a type error will be raised if:
-        * the first argument is any other non-record value,
-        * the second argument is any other non-array value, or contains 
non-record items.
+        * the first argument is any other non-object value,
+        * the second argument is any other non-array value, or contains 
non-object items.
 
 
  * Example:
 
-        record_add_fields(
+        object_add_fields(
                            {
                              "id":1,
                              "project":"AsterixDB",
@@ -181,26 +181,26 @@
            "employment_location": point("30.0,70.0")
          }
 
-### record_merge ###
+### object_merge ###
  * Syntax:
 
-        record_merge(record1, record2)
+        object_merge(object1, object2)
 
- * Merge two different records into a new record.
+ * Merge two different objects into a new object.
  * Arguments:
-    * `record1` : a record value.
-    * `record2` : a record value.
+    * `object1` : an object value.
+    * `object2` : an object value.
  * Return Value:
-    * a new record value with fields from both input records. If a field’s 
names in both records are the same,
+    * a new object value with fields from both input objects. If a field 
name occurs in both objects,
       an exception is issued,
     * `missing` if any argument is a `missing` value,
     * `null` if any argument is a `null` value but no argument is a `missing` 
value,
-    * any other non-record input value will cause a type error.
+    * any other non-object input value will cause a type error.
 
 
  * Example:
 
-        record_merge(
+        object_merge(
                       {
                         "id":1,
                         "project":"AsterixDB",
diff --git a/asterixdb/asterix-doc/src/main/markdown/sqlpp/2_expr.md 
b/asterixdb/asterix-doc/src/main/markdown/sqlpp/2_expr.md
index 811094e..17cf9bf 100644
--- a/asterixdb/asterix-doc/src/main/markdown/sqlpp/2_expr.md
+++ b/asterixdb/asterix-doc/src/main/markdown/sqlpp/2_expr.md
@@ -33,7 +33,7 @@
 
 The most basic building block for any SQL++ expression is PrimaryExpression. 
This can be a simple literal (constant)
 value, a reference to a query variable that is in scope, a parenthesized 
expression, a function call, or a newly
-constructed instance of the data model (such as a newly constructed record, 
array, or multiset of data model instances).
+constructed instance of the data model (such as a newly constructed object, 
array, or multiset of data model instances).
 
 ### <a id="Literals">Literals</a>
 
@@ -58,7 +58,7 @@
                      | <DIGITS> ( "." <DIGITS> )?
                      | "." <DIGITS>
 
-Literals (constants) in SQL++ can be strings, integers, floating point values, 
double values, boolean constants, or special constant values like `NULL` and 
`MISSING`. The `NULL` value is like a `NULL` in SQL; it is used to represent an 
unknown field value. The specialy value `MISSING` is only meaningful in the 
context of SQL++ field accesses; it occurs when the accessed field simply does 
not exist at all in a record being accessed.
+Literals (constants) in SQL++ can be strings, integers, floating point values, 
double values, boolean constants, or special constant values like `NULL` and 
`MISSING`. The `NULL` value is like a `NULL` in SQL; it is used to represent an 
unknown field value. The special value `MISSING` is only meaningful in the 
context of SQL++ field accesses; it occurs when the accessed field simply does 
not exist at all in an object being accessed.
 
 The following are some simple examples of SQL++ literals.
 
@@ -115,20 +115,20 @@
     CollectionConstructor    ::= ArrayConstructor | MultisetConstructor
     ArrayConstructor         ::= "[" ( Expression ( "," Expression )* )? "]"
     MultisetConstructor      ::= "{{" ( Expression ( "," Expression )* )? "}}"
-    RecordConstructor        ::= "{" ( FieldBinding ( "," FieldBinding )* )? 
"}"
+    ObjectConstructor        ::= "{" ( FieldBinding ( "," FieldBinding )* )? 
"}"
     FieldBinding             ::= Expression ":" Expression
 
 A major feature of SQL++ is its ability to construct new data model instances. 
This is accomplished using its constructors
-for each of the model's complex object structures, namely arrays, multisets, 
and records.
+for each of the model's complex object structures, namely arrays, multisets, 
and objects.
 Arrays are like JSON arrays, while multisets have bag semantics.
-Records are built from fields that are field-name/field-value pairs, again 
like JSON.
+Objects are built from fields that are field-name/field-value pairs, again 
like JSON.
 (See the [data model document](../datamodel.html) for more details on each.)
 
-The following examples illustrate how to construct a new array with 3 items, a 
new record with 2 fields,
+The following examples illustrate how to construct a new array with 3 items, a 
new object with 2 fields,
 and a new multiset with 4 items, respectively. Array elements or multiset 
elements can be homogeneous (as in
 the first example),
 which is the common case, or they may be heterogeneous (as in the third 
example). The data values and field name values
-used to construct arrays, multisets, and records in constructors are all 
simply SQL++ expressions. Thus, the collection elements,
+used to construct arrays, multisets, and objects in constructors are all 
simply SQL++ expressions. Thus, the collection elements,
 field names, and field values used in constructors can be simple literals or 
they can come from query variable references
 or even arbitrarily complex SQL++ expressions (subqueries).
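
A minimal sketch of the three constructors just described (the literal
values are chosen purely for illustration):

    [1, 2, 3]
    {"name": "Alice", "rank": 1}
    {{ 3, "hello", [2], true }}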
 
@@ -150,12 +150,12 @@
     Index           ::= "[" ( Expression | "?" ) "]"
 
 Components of complex types in the data model are accessed via path 
expressions. Path access can be applied to the result
-of a SQL++ expression that yields an instance of  a complex type, e.g., a 
record or array instance. For records,
+of a SQL++ expression that yields an instance of  a complex type, e.g., a 
object or array instance. For objects,
 path access is based on field names. For arrays, path access is based on 
(zero-based) array-style indexing.
 SQL++ also supports an "I'm feeling lucky" style index accessor, [?], for 
selecting an arbitrary element from an array.
  Attempts to access non-existent fields or out-of-bound array elements produce 
the special value `MISSING`.
 
-The following examples illustrate field access for a record, index-based 
element access for an array, and also a
+The following examples illustrate field access for an object, index-based 
element access for an array, and also a
 composition thereof.
 
 ##### Examples
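
A sketch of such accesses (the input values are assumed for illustration):

    ({"name": "MargaritaStoddard", "rank": 3}).name    -- "MargaritaStoddard"
    (["a", "b", "c"])[2]                               -- "c"
    ({"list": ["a", "b", "c"]}).list[0]                -- "a"
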
@@ -220,7 +220,7 @@
 | NOT EXISTS |  Check whether a collection is empty         | SELECT * FROM 
ChirpMessages cm <br/>WHERE NOT EXISTS cm.referredTopics; |
 
 ### <a id="Comparison_operators">Comparison operators</a>
-Comparison operators are used to compare values. The comparison operators fall 
into one of two sub-categories: missing value comparisons and regular value 
comparisons. SQL++ (and JSON) has two ways of representing missing information 
in a record - the presence of the field with a NULL for its value (as in SQL), 
and the absence of the field (which JSON permits). For example, the first of 
the following records represents Jack, whose friend is Jill. In the other 
examples, Jake is friendless a la SQL, with a friend field that is NULL, while 
Joe is friendless in a more natural (for JSON) way, i.e., by not having a 
friend field.
+Comparison operators are used to compare values. The comparison operators fall 
into one of two sub-categories: missing value comparisons and regular value 
comparisons. SQL++ (and JSON) has two ways of representing missing information 
in an object - the presence of the field with a NULL for its value (as in SQL), 
and the absence of the field (which JSON permits). For example, the first of 
the following objects represents Jack, whose friend is Jill. In the other 
examples, Jake is friendless a la SQL, with a friend field that is NULL, while 
Joe is friendless in a more natural (for JSON) way, i.e., by not having a 
friend field.
 
 ##### Examples
 {"name": "Jack", "friend": "Jill"}
diff --git a/asterixdb/asterix-doc/src/main/markdown/sqlpp/3_query.md 
b/asterixdb/asterix-doc/src/main/markdown/sqlpp/3_query.md
index 5ca0e1f..9ccf619 100644
--- a/asterixdb/asterix-doc/src/main/markdown/sqlpp/3_query.md
+++ b/asterixdb/asterix-doc/src/main/markdown/sqlpp/3_query.md
@@ -72,23 +72,23 @@
     OrderbyClause      ::= <ORDER> <BY> Expression ( <ASC> | <DESC> )? ( "," 
Expression ( <ASC> | <DESC> )? )*
     LimitClause        ::= <LIMIT> Expression ( <OFFSET> Expression )?
 
-In this section, we will make use of two stored collections of records 
(datasets), `GleambookUsers` and `GleambookMessages`, in a series of running 
examples to explain `SELECT` queries. The contents of the example collections 
are as follows:
+In this section, we will make use of two stored collections of objects 
(datasets), `GleambookUsers` and `GleambookMessages`, in a series of running 
examples to explain `SELECT` queries. The contents of the example collections 
are as follows:
 
 `GleambookUsers` collection:
 
-    
{"id":1,"alias":"Margarita","name":"MargaritaStoddard","nickname":"Mags","userSince":datetime("2012-08-20T10:10:00"),"friendIds":{{2,3,6,10}},"employment":[{"organizationName":"Codetechno","start-date":date("2006-08-06")},{"organizationName":"geomedia","start-date":date("2010-06-17"),"end-date":date("2010-01-26")}],"gender":"F"}
-    
{"id":2,"alias":"Isbel","name":"IsbelDull","nickname":"Izzy","userSince":datetime("2011-01-22T10:10:00"),"friendIds":{{1,4}},"employment":[{"organizationName":"Hexviafind","startDate":date("2010-04-27")}]}
-    
{"id":3,"alias":"Emory","name":"EmoryUnk","userSince":datetime("2012-07-10T10:10:00"),"friendIds":{{1,5,8,9}},"employment":[{"organizationName":"geomedia","startDate":date("2010-06-17"),"endDate":date("2010-01-26")}]}
+    
{"id":1,"alias":"Margarita","name":"MargaritaStoddard","nickname":"Mags","userSince":"2012-08-20T10:10:00","friendIds":[2,3,6,10],"employment":[{"organizationName":"Codetechno","start-date":"2006-08-06"},{"organizationName":"geomedia","start-date":"2010-06-17","end-date":"2010-01-26"}],"gender":"F"}
+    
{"id":2,"alias":"Isbel","name":"IsbelDull","nickname":"Izzy","userSince":"2011-01-22T10:10:00","friendIds":[1,4],"employment":[{"organizationName":"Hexviafind","startDate":"2010-04-27"}]}
+    
{"id":3,"alias":"Emory","name":"EmoryUnk","userSince":"2012-07-10T10:10:00","friendIds":[1,5,8,9],"employment":[{"organizationName":"geomedia","startDate":"2010-06-17","endDate":"2010-01-26"}]}
 
 `GleambookMessages` collection:
 
-    
{"messageId":2,"authorId":1,"inResponseTo":4,"senderLocation":point("41.66,80.87"),"message":"
 dislike iphone its touch-screen is horrible"}
-    
{"messageId":3,"authorId":2,"inResponseTo":4,"senderLocation":point("48.09,81.01"),"message":"
 like samsung the plan is amazing"}
-    
{"messageId":4,"authorId":1,"inResponseTo":2,"senderLocation":point("37.73,97.04"),"message":"
 can't stand at&t the network is horrible:("}
-    
{"messageId":6,"authorId":2,"inResponseTo":1,"senderLocation":point("31.5,75.56"),"message":"
 like t-mobile its platform is mind-blowing"}
-    
{"messageId":8,"authorId":1,"inResponseTo":11,"senderLocation":point("40.33,80.87"),"message":"
 like verizon the 3G is awesome:)"}
-    
{"messageId":10,"authorId":1,"inResponseTo":12,"senderLocation":point("42.5,70.01"),"message":"
 can't stand motorola the touch-screen is terrible"}
-    
{"messageId":11,"authorId":1,"inResponseTo":1,"senderLocation":point("38.97,77.49"),"message":"
 can't stand at&t its plan is terrible"}
+    
{"messageId":2,"authorId":1,"inResponseTo":4,"senderLocation":[41.66,80.87],"message":"
 dislike iphone its touch-screen is horrible"}
+    
{"messageId":3,"authorId":2,"inResponseTo":4,"senderLocation":[48.09,81.01],"message":"
 like samsung the plan is amazing"}
+    
{"messageId":4,"authorId":1,"inResponseTo":2,"senderLocation":[37.73,97.04],"message":"
 can't stand at&t the network is horrible:("}
+    
{"messageId":6,"authorId":2,"inResponseTo":1,"senderLocation":[31.5,75.56],"message":"
 like t-mobile its platform is mind-blowing"}
+    
{"messageId":8,"authorId":1,"inResponseTo":11,"senderLocation":[40.33,80.87],"message":"
 like verizon the 3G is awesome:)"}
+    
{"messageId":10,"authorId":1,"inResponseTo":12,"senderLocation":[42.5,70.01],"message":"
 can't stand motorola the touch-screen is terrible"}
+    
{"messageId":11,"authorId":1,"inResponseTo":1,"senderLocation":[38.97,77.49],"message":"
 can't stand at&t its plan is terrible"}
 
 ## <a id="Select_clauses">SELECT Clause</a>
 The SQL++ `SELECT` clause always returns a collection value as its result 
(even if the result is empty or a singleton).
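
As a minimal illustration (a sketch using the collections above):

    SELECT u.name FROM GleambookUsers u;

This yields a collection of objects of the form { "name": ... }, one per user.
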
@@ -150,7 +150,7 @@
     } ]
 
 ### <a id="Select_star">SELECT *</a>
-In SQL++, `SELECT *` returns a record with a nested field for each input 
tuple. Each field has as its field name the name of a binding variable 
generated by either the `FROM` clause or `GROUP BY` clause in the current 
enclosing `SELECT` statement, and its field is the value of that binding 
variable.
+In SQL++, `SELECT *` returns an object with a nested field for each input 
tuple. Each field has as its field name the name of a binding variable 
generated by either the `FROM` clause or `GROUP BY` clause in the current 
enclosing `SELECT` statement, and as its value the value of that binding 
variable.
 
 ##### Example
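
A sketch of such a query against the collections above (the result shape is
described here informally):

    SELECT * FROM GleambookUsers user;

Each object in the result has a single field named `user`, whose value is one
complete GleambookUsers object.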
 
@@ -296,7 +296,7 @@
 For each of its input tuples, the `UNNEST` clause flattens a collection-valued 
expression into individual items, producing multiple tuples, each of which is 
one of the expression's original input tuples augmented with a flattened item 
from its collection.
 
 ### <a id="Inner_unnests">Inner UNNEST</a>
-The following example is a query that retrieves the names of the organizations 
that a selected user has worked for. It uses the `UNNEST` clause to unnest the 
nested collection `employment` in the user's record.
+The following example is a query that retrieves the names of the organizations 
that a selected user has worked for. It uses the `UNNEST` clause to unnest the 
nested collection `employment` in the user's object.
 
 ##### Example
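
One plausible phrasing of the query just described (variable names are
assumptions for the sketch):

    SELECT u.id AS userId, e.organizationName AS orgName
    FROM GleambookUsers u
    UNNEST u.employment e
    WHERE u.id = 1;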
 
@@ -318,7 +318,7 @@
 Note that `UNNEST` has SQL's inner join semantics --- that is, if a user has 
no employment history, no tuple corresponding to that user will be emitted in 
the result.
 
 ### <a id="Left_outer_unnests">Left outer UNNEST</a>
-As an alternative, the `LEFT OUTER UNNEST` clause offers SQL's left outer join 
semantics. For example, no collection-valued field named `hobbies` exists in 
the record for the user whose id is 1, but the following query's result still 
includes user 1.
+As an alternative, the `LEFT OUTER UNNEST` clause offers SQL's left outer join 
semantics. For example, no collection-valued field named `hobbies` exists in 
the object for the user whose id is 1, but the following query's result still 
includes user 1.
 
 ##### Example
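
A sketch of such a query (variable names assumed for illustration):

    SELECT u.id AS userId, h AS hobby
    FROM GleambookUsers u
    LEFT OUTER UNNEST u.hobbies h;

User 1 still appears in the result; its `hobby` field is simply `MISSING`
(and therefore absent from the output object).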
 
@@ -337,7 +337,7 @@
 
 ### <a id="Expressing_joins_using_unnests">Expressing joins using UNNEST</a>
 The SQL++ `UNNEST` clause is similar to SQL's `JOIN` clause except that it 
allows its right argument to be correlated to its left argument, as in the 
examples above --- i.e., think "correlated cross-product".
-The next example shows this via a query that joins two data sets, 
GleambookUsers and GleambookMessages, returning user/message pairs. The results 
contain one record per pair, with result records containing the user's name and 
an entire message. The query can be thought of as saying "for each Gleambook 
user, unnest the `GleambookMessages` collection and filter the output with the 
condition `message.authorId = user.id`".
+The next example shows this via a query that joins two data sets, 
GleambookUsers and GleambookMessages, returning user/message pairs. The results 
contain one object per pair, with result objects containing the user's name and 
an entire message. The query can be thought of as saying "for each Gleambook 
user, unnest the `GleambookMessages` collection and filter the output with the 
condition `message.authorId = user.id`".
 
 ##### Example
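
The join query sketched from that description (aliases assumed):

    SELECT u.name AS uname, m.message AS message
    FROM GleambookUsers u
    UNNEST GleambookMessages m
    WHERE m.authorId = u.id;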
 
@@ -532,7 +532,7 @@
        "uname": "EmoryUnk"
     } ]
 
-For non-matching left-side tuples, SQL++ produces `MISSING` values for the 
right-side binding variables; that is why the last record in the above result 
doesn't have a `message` field. Note that this is slightly different from 
standard SQL, which instead would fill in `NULL` values for the right-side 
fields. The reason for this difference is that, for non-matches in its join 
results, SQL++ views fields from the right-side as being "not there" (a.k.a. 
`MISSING`) instead of as being "there but unknown" (i.e., `NULL`).
+For non-matching left-side tuples, SQL++ produces `MISSING` values for the 
right-side binding variables; that is why the last object in the above result 
doesn't have a `message` field. Note that this is slightly different from 
standard SQL, which instead would fill in `NULL` values for the right-side 
fields. The reason for this difference is that, for non-matches in its join 
results, SQL++ views fields from the right-side as being "not there" (a.k.a. 
`MISSING`) instead of as being "there but unknown" (i.e., `NULL`).
 
 The left-outer join query can also be expressed using `LEFT OUTER UNNEST`:
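
One plausible formulation (a sketch; the correlated subquery preserves
non-matching users, per the semantics described above):

    SELECT u.name AS uname, m.message AS message
    FROM GleambookUsers u
    LEFT OUTER UNNEST (
        SELECT VALUE msg
        FROM GleambookMessages msg
        WHERE msg.authorId = u.id
    ) m;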
 
@@ -551,7 +551,7 @@
 
 ### <a id="Group_variables">Group variables</a>
 In a `GROUP BY` clause, in addition to the binding variable(s) defined for the 
grouping key(s), SQL++ allows a user to define a *group variable* by using the 
clause's `GROUP AS` extension to denote the resulting group.
-After grouping, then, the query's in-scope variables include the grouping 
key's binding variables as well as this group variable which will be bound to 
one collection value for each group. This per-group collection value will be a 
set of nested records in which each field of the record is the result of a 
renamed variable defined in parentheses following the group variable's name. 
The `GROUP AS` syntax is as follows:
+After grouping, then, the query's in-scope variables include the grouping 
key's binding variables as well as this group variable which will be bound to 
one collection value for each group. This per-group collection value will be a 
set of nested objects in which each field of the object is the result of a 
renamed variable defined in parentheses following the group variable's name. 
The `GROUP AS` syntax is as follows:
 
     <GROUP> <AS> Variable ("(" Variable <AS> VariableReference ("," Variable 
<AS> VariableReference )* ")")?
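
A sketch of `GROUP AS` in use, consistent with the `msgs`/`msg` naming used
in the discussion below (details assumed):

    SELECT uid, msgs
    FROM GleambookMessages m
    GROUP BY m.authorId AS uid
    GROUP AS msgs(m AS msg);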
 
@@ -659,13 +659,13 @@
 
 As we can see from the above query result, each group in the example query's 
output has an associated group
 variable value called `msgs` that appears in the `SELECT *`'s result.
-This variable contains a collection of records associated with the group; each 
of the group's `message` values
-appears in the `msg` field of the records in the `msgs` collection.
+This variable contains a collection of objects associated with the group; each 
of the group's `message` values
+appears in the `msg` field of the objects in the `msgs` collection.
 
 The group variable in SQL++ makes more complex, composable, nested subqueries 
over a group possible, which is
 important given the more complex data model of SQL++ (relative to SQL).
 As a simple example of this, as we really just want the messages associated 
with each user, we might wish to avoid
-the "extra wrapping" of each message as the `msg` field of a record.
+the "extra wrapping" of each message as the `msg` field of a object.
 (That wrapping is useful in more complex cases, but is essentially just in the 
way here.)
 We can use a subquery in the `SELECT` clause to tunnel through the extra 
nesting and produce the desired result.
 
@@ -1460,7 +1460,7 @@
 
 | Feature |  SQL++ | SQL-92 |
 |----------|--------|--------|
-| SELECT * | Returns nested records | Returns flattened concatenated records |
+| SELECT * | Returns nested objects | Returns flattened concatenated objects |
 | Subquery | Returns a collection  | The returned collection is cast into a 
scalar value if the subquery appears in a SELECT list or on one side of a 
comparison or as input to a function |
 | LEFT OUTER JOIN |  Fills in `MISSING`(s) for non-matches  |   Fills in 
`NULL`(s) for non-matches    |
 | UNION ALL       | Allows heterogeneous inputs and output | Input streams 
must be UNION-compatible and output field names are drawn from the first input 
stream
@@ -1475,5 +1475,5 @@
   * Schema-free: The query language does not assume the existence of a static 
schema for any data that it processes.
   * Correlated FROM terms: A right-side FROM term expression can refer to 
variables defined by FROM terms on its left.
   * Powerful GROUP BY: In addition to a set of aggregate functions as in 
standard SQL, the groups created by the `GROUP BY` clause are directly usable 
in nested queries and/or to obtain nested results.
-  * Generalized SELECT clause: A SELECT clause can return any type of 
collection, while in SQL-92, a `SELECT` clause has to return a (homogeneous) 
collection of records.
+  * Generalized SELECT clause: A SELECT clause can return any type of 
collection, while in SQL-92, a `SELECT` clause has to return a (homogeneous) 
collection of objects.
 
diff --git a/asterixdb/asterix-doc/src/main/markdown/sqlpp/5_ddl.md 
b/asterixdb/asterix-doc/src/main/markdown/sqlpp/5_ddl.md
index d236003..b6577ff 100644
--- a/asterixdb/asterix-doc/src/main/markdown/sqlpp/5_ddl.md
+++ b/asterixdb/asterix-doc/src/main/markdown/sqlpp/5_ddl.md
@@ -105,12 +105,12 @@
 
 ### <a id="Types"> Types</a>
 
-    TypeSpecification    ::= "TYPE" FunctionOrTypeName IfNotExists "AS" 
RecordTypeDef
+    TypeSpecification    ::= "TYPE" FunctionOrTypeName IfNotExists "AS" 
ObjectTypeDef
     FunctionOrTypeName   ::= QualifiedName
     IfNotExists          ::= ( <IF> <NOT> <EXISTS> )?
-    TypeExpr             ::= RecordTypeDef | TypeReference | ArrayTypeDef | 
MultisetTypeDef
-    RecordTypeDef        ::= ( <CLOSED> | <OPEN> )? "{" ( RecordField ( "," 
RecordField )* )? "}"
-    RecordField          ::= Identifier ":" ( TypeExpr ) ( "?" )?
+    TypeExpr             ::= ObjectTypeDef | TypeReference | ArrayTypeDef | 
MultisetTypeDef
+    ObjectTypeDef        ::= ( <CLOSED> | <OPEN> )? "{" ( ObjectField ( "," 
ObjectField )* )? "}"
+    ObjectField          ::= Identifier ":" ( TypeExpr ) ( "?" )?
     NestedField          ::= Identifier ( "." Identifier )*
     IndexField           ::= NestedField ( ":" TypeReference )?
     TypeReference        ::= Identifier
@@ -120,17 +120,17 @@
 The CREATE TYPE statement is used to create a new named datatype.
 This type can then be used to create stored collections or utilized when 
defining one or more other datatypes.
 Much more information about the data model is available in the [data model 
reference guide](datamodel.html).
-A new type can be a record type, a renaming of another type, an array type, or 
a multiset type.
-A record type can be defined as being either open or closed.
-Instances of a closed record type are not permitted to contain fields other 
than those specified in the create type statement.
-Instances of an open record type may carry additional fields, and open is the 
default for new types if neither option is specified.
+A new type can be an object type, a renaming of another type, an array type, or 
a multiset type.
+An object type can be defined as being either open or closed.
+Instances of a closed object type are not permitted to contain fields other 
than those specified in the create type statement.
+Instances of an open object type may carry additional fields, and open is the 
default for new types if neither option is specified.
 
-The following example creates a new record type called GleambookUser type.
+The following example creates a new object type called GleambookUserType.
 Since it is defined as (defaulting to) being an open type,
 instances will be permitted to contain more than what is specified in the type 
definition.
 The first four fields are essentially traditional typed name/value pairs (much 
like SQL fields).
 The friendIds field is a multiset of integers.
-The employment field is an array of instances of another named record type, 
EmploymentType.
+The employment field is an array of instances of another named object type, 
EmploymentType.
 
 ##### Example
 
@@ -143,7 +143,7 @@
       employment: [ EmploymentType ]
     };
 
-The next example creates a new record type, closed this time, called 
MyUserTupleType.
+The next example creates a new object type, closed this time, called 
MyUserTupleType.
 Instances of this closed type will not be permitted to have extra fields,
 although the alias field is marked as optional and may thus be NULL or MISSING 
in legal instances of the type.
 Note that the type of the id field in the example is UUID.
@@ -178,16 +178,16 @@
     CompactionPolicy     ::= Identifier
 
 The CREATE DATASET statement is used to create a new dataset.
-Datasets are named, multisets of record type instances;
+Datasets are named multisets of object type instances;
 they are where data lives persistently and are the usual targets for SQL++ 
queries.
 Datasets are typed, and the system ensures that their contents conform to 
their type definitions.
 An Internal dataset (the default kind) is a dataset whose content lives within 
and is managed by the system.
-It is required to have a specified unique primary key field which uniquely 
identifies the contained records.
-(The primary key is also used in secondary indexes to identify the indexed 
primary data records.)
+It is required to have a specified unique primary key field which uniquely 
identifies the contained objects.
+(The primary key is also used in secondary indexes to identify the indexed 
primary data objects.)
 
 Internal datasets contain several advanced options that can be specified when 
appropriate.
 One such option is that random primary key (UUID) values can be auto-generated 
by declaring the field to be UUID and putting "AUTOGENERATED" after the 
"PRIMARY KEY" identifier.
-In this case, unlike other non-optional fields, a value for the auto-generated 
PK field should not be provided at insertion time by the user since each 
record's primary key field value will be auto-generated by the system.
+In this case, unlike other non-optional fields, a value for the auto-generated 
PK field should not be provided at insertion time by the user since each 
object's primary key field value will be auto-generated by the system.
 
 Another advanced option, when creating an Internal dataset, is to specify the 
merge policy to control which of the
 underlying LSM storage components to be merged.
@@ -214,17 +214,17 @@
 When defining an External dataset, an appropriate adapter type must be 
selected for the desired external data.
 (See the [Guide to External Data](externaldata.html) for more information on 
the available adapters.)
 
-The following example creates an Internal dataset for storing FacefookUserType 
records.
+The following example creates an Internal dataset for storing GleambookUserType 
objects.
 It specifies that their id field is their primary key.
 
 #### Example
 
     CREATE INTERNAL DATASET GleambookUsers(GleambookUserType) PRIMARY KEY id;
 
-The next example creates another Internal dataset (the default kind when no 
dataset kind is specified) for storing MyUserTupleType records.
+The next example creates another Internal dataset (the default kind when no 
dataset kind is specified) for storing MyUserTupleType objects.
 It specifies that the id field should be used as the primary key for the 
dataset.
 It also specifies that the id field is an auto-generated field,
-meaning that a randomly generated UUID value should be assigned to each 
incoming record by the system.
+meaning that a randomly generated UUID value should be assigned to each 
incoming object by the system.
 (A user should therefore not attempt to provide a value for this field.)
 Note that the id field's declared type must be UUID in this case.
 
@@ -232,7 +232,7 @@
 
     CREATE DATASET MyUsers(MyUserTupleType) PRIMARY KEY id AUTOGENERATED;
 
-The next example creates an External dataset for querying LineItemType records.
+The next example creates an External dataset for querying LineItemType objects.
 The choice of the `hdfs` adapter means that this dataset's data actually 
resides in HDFS.
 The example CREATE statement also provides parameters used by the hdfs adapter:
 the URL and path needed to locate the data in HDFS and a description of the 
data format.
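
A sketch of such a statement (host, port, and path are placeholders; the
parameter names follow the hdfs adapter examples elsewhere in these docs):

    CREATE EXTERNAL DATASET LineItem(LineItemType) USING hdfs(
        ("hdfs"="hdfs://HOST:PORT"),
        ("path"="/path/to/lineitem.tbl"),
        ("input-format"="text-input-format"),
        ("format"="delimited-text"),
        ("delimiter"="|"));
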
@@ -264,7 +264,7 @@
 is declared as open **and** if the field's type is provided along with its 
name and if the `ENFORCED` keyword is
 specified at the end of the index definition.
 `ENFORCING` an open field introduces a check that makes sure that the actual 
type of the indexed field
-(if the optional field exists in the record) always matches this specified 
(open) field type.
+(if the optional field exists in the object) always matches this specified 
(open) field type.
 
 The following example creates a btree index called gbAuthorIdx on the authorId 
field of the GleambookMessages dataset.
 This index can be useful for accelerating exact-match queries, range search 
queries, and joins involving the author-id
@@ -282,7 +282,7 @@
     CREATE INDEX gbSendTimeIdx ON GleambookMessages(sendTime: datetime?) TYPE 
BTREE ENFORCED;
 
 The following example creates a btree index called crpUserScrNameIdx on 
screenName,
-a nested field residing within a record-valued user field in the ChirpMessages 
dataset.
+a nested field residing within an object-valued user field in the ChirpMessages 
dataset.
 This index can be useful for accelerating exact-match queries, range search 
queries,
 and joins involving the nested screenName field.
 Such nested fields must be singular, i.e., one cannot index through (or on) an 
array-valued field.
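
A sketch of that index DDL (the statement itself is not shown in this hunk;
the nested path is inferred from the description):

    CREATE INDEX crpUserScrNameIdx ON ChirpMessages(user.screenName) TYPE BTREE;
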
@@ -388,7 +388,7 @@
 This expression can be as simple as a constant expression, or in general it 
can be any legal SQL++ query.
 If the target dataset has an auto-generated primary key field, the insert 
statement should not include a
 value for that field in it.
-(The system will automatically extend the provided record with this additional 
field and a corresponding value.)
+(The system will automatically extend the provided object with this additional 
field and a corresponding value.)
 Insertion will fail if the dataset already has data with the primary key 
value(s) being inserted.
 
 Inserts are processed transactionally by the system.
diff --git a/asterixdb/asterix-doc/src/site/markdown/aql/externaldata.md 
b/asterixdb/asterix-doc/src/site/markdown/aql/externaldata.md
index 5095b97..018125e 100644
--- a/asterixdb/asterix-doc/src/site/markdown/aql/externaldata.md
+++ b/asterixdb/asterix-doc/src/site/markdown/aql/externaldata.md
@@ -34,10 +34,10 @@
 Data that needs to be processed by AsterixDB could be residing outside 
AsterixDB storage. Examples include data files on a distributed file system 
such as HDFS or on the local file system of a machine that is part of an 
AsterixDB cluster. For AsterixDB to process such data, an end-user may create a 
regular dataset in AsterixDB (a.k.a. an internal dataset) and load the dataset 
with the data. AsterixDB also supports ‘‘external datasets’’ so that it is not 
necessary to “load” all data prior to using it. This also avoids creating 
multiple copies of data and the need to keep the copies in sync.
 
 ### <a id="IntroductionAdapterForAnExternalDataset">Adapter for an External 
Dataset</a> <font size="4"><a href="#toc">[Back to TOC]</a></font> ###
-External data is accessed using wrappers (adapters in AsterixDB) that abstract 
away the mechanism of connecting with an external service, receiving its data 
and transforming the data into ADM records that are understood by AsterixDB. 
AsterixDB comes with built-in adapters for common storage systems such as HDFS 
or the local file system.
+External data is accessed using wrappers (adapters in AsterixDB) that abstract 
away the mechanism of connecting with an external service, receiving its data 
and transforming the data into ADM objects that are understood by AsterixDB. 
AsterixDB comes with built-in adapters for common storage systems such as HDFS 
or the local file system.
 
 ### <a id="BuiltinAdapters">Builtin Adapters</a> <font size="4"><a 
href="#toc">[Back to TOC]</a></font> ###
-AsterixDB offers a set of builtin adapters that can be used to query external 
data or for loading data into an internal dataset using a load statement or a 
data feed. Each adapter requires specifying the `format` of the data in order 
to be able to parse records correctly. Using adapters with feeds, the parameter 
`output-type` must also be specified.
+AsterixDB offers a set of builtin adapters that can be used to query external 
data or for loading data into an internal dataset using a load statement or a 
data feed. Each adapter requires specifying the `format` of the data in order 
to be able to parse objects correctly. When using adapters with feeds, the 
parameter `output-type` must also be specified.
 
 Following is a listing of existing built-in adapters and their configuration 
parameters:
 
@@ -76,7 +76,7 @@
 As an example we consider the Lineitem dataset from the [TPCH 
schema](http://www.openlinksw.com/dataspace/doc/dav/wiki/Main/VOSTPCHLinkedData/tpch.sql).
 We assume that you have successfully created an AsterixDB instance following 
the instructions at [Installing AsterixDB Using Managix](../install.html). _For 
constructing an example, we assume a single machine setup._
 
-Similar to a regular dataset, an external dataset has an associated datatype. 
We shall first create the datatype associated with each record in Lineitem 
data. Paste the following in the
+Similar to a regular dataset, an external dataset has an associated datatype. 
We shall first create the datatype associated with each object in Lineitem 
data. Paste the following in the
 query textbox on the webpage at http://127.0.0.1:19001 and hit ‘Execute’.
 
         create dataverse ExternalFileDemo;
@@ -191,7 +191,7 @@
   <td> The absolute path to the source HDFS file or directory. Use a comma 
separated list if there are multiple files or directories. </td></tr>
 <tr>
   <td> input-format </td>
-  <td> The associated input format. Use 'text-input-format' for text files , 
'sequence-input-format' for hadoop sequence files, 'rc-input-format' for Hadoop 
Record Columnar files, or a fully qualified name of an implementation of 
org.apache.hadoop.mapred.InputFormat. </td>
+  <td> The associated input format. Use 'text-input-format' for text files, 
'sequence-input-format' for Hadoop sequence files, 'rc-input-format' for Hadoop 
Record Columnar files, or a fully qualified name of an implementation of 
org.apache.hadoop.mapred.InputFormat. </td>
 </tr>
 <tr>
   <td> format </td>
@@ -203,11 +203,11 @@
 </tr>
 <tr>
   <td> parser </td>
-  <td> The parser used to parse HDFS records if the format is 'binary'. Use 
'hive- parser' for data deserialized by a Hive Serde (AsterixDB can understand 
deserialized Hive objects) or a fully qualified class name of user- implemented 
parser that implements the interface 
org.apache.asterix.external.input.InputParser. </td>
+  <td> The parser used to parse HDFS objects if the format is 'binary'. Use 
'hive-parser' for data deserialized by a Hive Serde (AsterixDB can understand 
deserialized Hive objects) or a fully qualified class name of a user-implemented 
parser that implements the interface 
org.apache.asterix.external.input.InputParser. </td>
 </tr>
 <tr>
   <td> hive-serde </td>
-  <td> The Hive serde is used to deserialize HDFS records if format is binary 
and the parser is hive-parser. Use a fully qualified name of a class 
implementation of org.apache.hadoop.hive.serde2.SerDe. </td>
+  <td> The Hive serde is used to deserialize HDFS objects if format is binary 
and the parser is hive-parser. Use a fully qualified name of a class 
implementation of org.apache.hadoop.hive.serde2.SerDe. </td>
 </tr>
 <tr>
   <td> local-socket-path </td>
@@ -218,11 +218,11 @@
 *Difference between 'input-format' and 'format'*
 
 *input-format*: Files stored under HDFS have an associated storage format. For 
example,
-TextInputFormat represents plain text files. SequenceFileInputFormat indicates 
binary compressed files. RCFileInputFormat corresponds to records stored in a 
record columnar fashion. The parameter ‘input-format’ is used to distinguish 
between these and other HDFS input formats.
+TextInputFormat represents plain text files. SequenceFileInputFormat indicates 
binary compressed files. RCFileInputFormat corresponds to objects stored in a 
record-columnar fashion. The parameter ‘input-format’ is used to distinguish 
between these and other HDFS input formats.
 
 *format*: The parameter ‘format’ refers to the type of the data contained in 
the file. For example, data contained in a file could be in json or ADM format, 
could be in delimited-text with fields separated by a delimiting character or 
could be in binary format.
 
-As an example. consider the [data file](../data/lineitem.tbl).  The file is a 
text file with each line representing a record. The fields in each record are 
separated by the '|' character.
+As an example, consider the [data file](../data/lineitem.tbl). The file is a 
text file with each line representing an object. The fields in each object are 
separated by the '|' character.
 
 We assume the HDFS URL to be hdfs://localhost:54310. We further assume that 
the example data file is copied to HDFS at a path denoted by 
“/asterix/Lineitem.tbl”.
 
@@ -231,7 +231,7 @@
 
 #### Using the Hive Parser ####
 
-if a user wants to create an external dataset that uses hive-parser to parse 
HDFS records, it is important that the datatype associated with the dataset 
matches the actual data in the Hive table for the correct initialization of the 
Hive SerDe. Here is the conversion from the supported Hive data types to 
AsterixDB data types:
+If a user wants to create an external dataset that uses hive-parser to parse 
HDFS objects, it is important that the datatype associated with the dataset 
matches the actual data in the Hive table for the correct initialization of the 
Hive SerDe. Here is the conversion from the supported Hive data types to 
AsterixDB data types:
 
 <table>
 <tr>
@@ -280,7 +280,7 @@
 </tr>
 <tr>
   <td>STRUCT</td>
-  <td>Nested Record</td>
+  <td>Nested Object</td>
 </tr>
 <tr>
   <td>LIST</td>
@@ -296,12 +296,12 @@
                create external dataset Lineitem('LineitemType)
                using 
hdfs(("hdfs"="hdfs://localhost:54310"),("path"="/asterix/Lineitem.tbl"),("input-format"="text-
 input-format"),("format"="delimited-text"),("delimiter"="|"));
 
-*Example 2*: Here, we create an external dataset of lineitem records stored in 
sequence files that has content in ADM format:
+*Example 2*: Here, we create an external dataset of lineitem objects stored in 
sequence files that have content in ADM format:
 
                create external dataset Lineitem('LineitemType) 
                using 
hdfs(("hdfs"="hdfs://localhost:54310"),("path"="/asterix/SequenceLineitem.tbl"),("input-
 format"="sequence-input-format"),("format"="adm"));
 
-*Example 3*: Here, we create an external dataset of lineitem records stored in 
record-columnar files that has content in binary format parsed using 
hive-parser with hive ColumnarSerde:
+*Example 3*: Here, we create an external dataset of lineitem objects stored in 
record-columnar files that have content in binary format parsed using 
hive-parser with hive ColumnarSerde:
 
                create external dataset Lineitem('LineitemType)
                using 
hdfs(("hdfs"="hdfs://localhost:54310"),("path"="/asterix/RCLineitem.tbl"),("input-format"="rc-input-format"),("format"="binary"),("parser"="hive-parser"),("hive-
 serde"="org.apache.hadoop.hive.serde2.columnar.ColumnarSerde"));
@@ -336,7 +336,7 @@
 AsterixDB can read all HDFS input formats, but indexes over external datasets 
can currently be built only for HDFS datasets with 'text-input-format', 
'sequence-input-format' or 'rc-input-format'.
 
 ## <a id="ExternalDataSnapshots">External Data Snapshots</a> <font size="4"><a 
href="#toc">[Back to TOC]</a></font> ##
-An external data snapshot represents the status of a dataset's files in HDFS 
at a point in time. Upon creating the first index over an external dataset, 
AsterixDB captures and stores a snapshot of the dataset in HDFS. Only records 
present at the snapshot capture time are indexed, and any additional indexes 
created afterwards will only contain data that was present at the snapshot 
capture time thus preserving consistency across all indexes of a dataset.
+An external data snapshot represents the status of a dataset's files in HDFS 
at a point in time. Upon creating the first index over an external dataset, 
AsterixDB captures and stores a snapshot of the dataset in HDFS. Only objects 
present at the snapshot capture time are indexed, and any additional indexes 
created afterwards will only contain data that was present at the snapshot 
capture time, thus preserving consistency across all indexes of a dataset.
 To update all indexes of an external dataset and advance the snapshot time to 
be the present time, a user can use the refresh external dataset command as 
follows:
 
                refresh external dataset DatasetName;
@@ -357,7 +357,7 @@
 
 A. No, queries' results are access path independent and the stored snapshot is 
used to determine which data are going to be included when processing queries.
 
-Q. I created an index over an external dataset and then deleted some of my 
dataset's files in HDFS, Will indexed data access still return the records in 
deleted files?
+Q. I created an index over an external dataset and then deleted some of my 
dataset's files in HDFS. Will indexed data access still return the objects in 
deleted files?
 
 A. No. When AsterixDB accesses external data, with or without the use of 
indexes, it only accesses files present in the file system at runtime.
 
diff --git a/asterixdb/asterix-doc/src/site/markdown/aql/filters.md 
b/asterixdb/asterix-doc/src/site/markdown/aql/filters.md
index 9a1fc4c..24461f3 100644
--- a/asterixdb/asterix-doc/src/site/markdown/aql/filters.md
+++ b/asterixdb/asterix-doc/src/site/markdown/aql/filters.md
@@ -72,7 +72,7 @@
 be to scan the whole `TweetMessages` dataset and then apply the
 predicate as a post-processing step. However, if disk components of
 the primary index were tagged with the minimum and maximum timestamp
-values of the records they contain, we could utilize the tagged
+values of the objects they contain, we could utilize the tagged
 information to directly access the primary index and prune components
 that do not match the query predicate. Thus, we could save substantial
 cost by avoiding scanning the whole dataset and only access the
diff --git a/asterixdb/asterix-doc/src/site/markdown/aql/manual.md 
b/asterixdb/asterix-doc/src/site/markdown/aql/manual.md
index 393beec..ecdc715 100644
--- a/asterixdb/asterix-doc/src/site/markdown/aql/manual.md
+++ b/asterixdb/asterix-doc/src/site/markdown/aql/manual.md
@@ -66,7 +66,7 @@
                   | FunctionCallExpr
                   | DatasetAccessExpression
                   | ListConstructor
-                  | RecordConstructor
+                  | ObjectConstructor
 
 The most basic building block for any AQL expression is the PrimaryExpr.
 This can be a simple literal (constant) value,
@@ -75,7 +75,7 @@
 a function call,
 an expression accessing the ADM contents of a dataset,
 a newly constructed list of ADM instances,
-or a newly constructed ADM record.
+or a newly constructed ADM object.
 
 #### Literals
 
@@ -168,7 +168,7 @@
     <SPECIALCHARS>          ::= ["$", "_", "-"]
 
 Querying Big Data is the main point of AsterixDB and AQL.
-Data in AsterixDB reside in datasets (collections of ADM records),
+Data in AsterixDB reside in datasets (collections of ADM objects),
 each of which in turn resides in some namespace known as a dataverse (data 
universe).
 Data access in a query expression is accomplished via a 
DatasetAccessExpression.
 Dataset access expressions are most commonly used in FLWOR expressions, where 
variables
@@ -193,21 +193,21 @@
     ListConstructor          ::= ( OrderedListConstructor | 
UnorderedListConstructor )
     OrderedListConstructor   ::= "[" ( Expression ( "," Expression )* )? "]"
     UnorderedListConstructor ::= "{{" ( Expression ( "," Expression )* )? "}}"
-    RecordConstructor        ::= "{" ( FieldBinding ( "," FieldBinding )* )? 
"}"
+    ObjectConstructor        ::= "{" ( FieldBinding ( "," FieldBinding )* )? 
"}"
     FieldBinding             ::= Expression ":" Expression
 
 A major feature of AQL is its ability to construct new ADM data instances.
 This is accomplished using its constructors for each of the major ADM complex 
object structures,
-namely lists (ordered or unordered) and records.
+namely lists (ordered or unordered) and objects.
 Ordered lists are like JSON arrays, while unordered lists have bag (multiset) 
semantics.
-Records are built from attributes that are field-name/field-value pairs, again 
like JSON.
+Objects are built from attributes that are field-name/field-value pairs, again 
like JSON.
 (See the AsterixDB Data Model document for more details on each.)
 
 The following examples illustrate how to construct a new ordered list with 3 
items,
-a new unordered list with 4 items, and a new record with 2 fields, 
respectively.
+a new unordered list with 4 items, and a new object with 2 fields, 
respectively.
 List elements can be homogeneous (as in the first example), which is the 
common case,
 or they may be heterogeneous (as in the second example).
-The data values and field name values used to construct lists and records in 
constructors are all simply AQL expressions.
+The data values and field name values used to construct lists and objects in 
constructors are all simply AQL expressions.
 Thus the list elements, field names, and field values used in constructors can 
be simple literals (as in these three examples)
 or they can come from query variable references or even arbitrarily complex 
AQL expressions.
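
A sketch of those three constructors (values assumed for illustration):

    [1, 2, 3]
    {{ "a", 2, true, 4.0 }}
    {"name": "Alice", "age": 30}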
 
@@ -224,7 +224,7 @@
 
 ##### Note
 
-When constructing nested records there needs to be a space between the closing 
braces to avoid confusion with the `}}` token that ends an unordered list 
constructor:
+When constructing nested objects, there needs to be a space between the closing 
braces to avoid confusion with the `}}` token that ends an unordered list 
constructor:
 `{ "a" : { "b" : "c" }}` will fail to parse while `{ "a" : { "b" : "c" } }` 
will work.
 
 ### Path Expressions
@@ -234,13 +234,13 @@
     Index     ::= "[" ( Expression | "?" ) "]"
 
 Components of complex types in ADM are accessed via path expressions.
-Path access can be applied to the result of an AQL expression that yields an 
instance of such a type, e.g., a record or list instance.
-For records, path access is based on field names.
+Path access can be applied to the result of an AQL expression that yields an 
instance of such a type, e.g., an object or list instance.
+For objects, path access is based on field names.
 For ordered lists, path access is based on (zero-based) array-style indexing.
 AQL also supports an "I'm feeling lucky" style index accessor, [?], for 
selecting an arbitrary element from an ordered list.
 Attempts to access non-existent fields or list elements produce a null (i.e., 
missing information) result as opposed to signaling a runtime error.
 
-The following examples illustrate field access for a record, index-based 
element access for an ordered list, and also a composition thereof.
+The following examples illustrate field access for an object, index-based 
element access for an ordered list, and also a composition thereof.
 
 ##### Examples
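
An illustrative sketch, with assumed values, of field access, index-based
element access, and their composition:

    ({"name": "MyABCs", "array": ["a", "b", "c"]}).array
    (["a", "b", "c"])[2]
    ({"name": "MyABCs", "array": ["a", "b", "c"]}).array[2]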
 
@@ -341,7 +341,7 @@
 
 The next example shows a FLWOR expression that joins two datasets, 
FacebookUsers and FacebookMessages,
 returning user/message pairs.
-The results contain one record per pair, with result records containing the 
user's name and an entire message.
+The results contain one object per pair, with result objects containing the 
user's name and an entire message.
 
 ##### Example
 
@@ -355,7 +355,7 @@
       };
 
 In the next example, a `let` clause is used to bind a variable to all of a 
user's FacebookMessages.
-The query returns one record per user, with result records containing the 
user's name and the set of all messages by that user.
+The query returns one object per user, with result objects containing the 
user's name and the set of all messages by that user.
 
 ##### Example
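
A sketch of such a query, with field names assumed for illustration:

    for $user in dataset FacebookUsers
    let $messages :=
        for $message in dataset FacebookMessages
        where $message.author-id = $user.id
        return $message.message
    return { "uname": $user.name, "messages": $messages };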
 
@@ -485,7 +485,7 @@
 
 In addition to expressions for queries, AQL supports a variety of statements 
for data
 definition and manipulation purposes as well as controlling the context to be 
used in
-evaluating AQL expressions. AQL supports record-level ACID transactions that 
begin and terminate implicitly for each record inserted, deleted, upserted, or 
searched while a given AQL statement is being executed.
+evaluating AQL expressions. AQL supports object-level ACID transactions that 
begin and terminate implicitly for each object inserted, deleted, upserted, or 
searched while a given AQL statement is being executed.
 
 This section details the statements supported in the AQL language.
 
@@ -564,9 +564,9 @@
     TypeSpecification    ::= "type" FunctionOrTypeName IfNotExists "as" 
TypeExpr
     FunctionOrTypeName   ::= QualifiedName
     IfNotExists          ::= ( "if not exists" )?
-    TypeExpr             ::= RecordTypeDef | TypeReference | 
OrderedListTypeDef | UnorderedListTypeDef
-    RecordTypeDef        ::= ( "closed" | "open" )? "{" ( RecordField ( "," 
RecordField )* )? "}"
-    RecordField          ::= Identifier ":" ( TypeExpr ) ( "?" )?
+    TypeExpr             ::= ObjectTypeDef | TypeReference | 
OrderedListTypeDef | UnorderedListTypeDef
+    ObjectTypeDef        ::= ( "closed" | "open" )? "{" ( ObjectField ( "," 
ObjectField )* )? "}"
+    ObjectField          ::= Identifier ":" ( TypeExpr ) ( "?" )?
     NestedField          ::= Identifier ( "." Identifier )*
     IndexField           ::= NestedField ( ":" TypeReference )?
     TypeReference        ::= Identifier
@@ -576,16 +576,16 @@
 The create type statement is used to create a new named ADM datatype.
 This type can then be used to create datasets or utilized when defining one or 
more other ADM datatypes.
 Much more information about the Asterix Data Model (ADM) is available in the 
[data model reference guide](datamodel.html) to ADM.
-A new type can be a record type, a renaming of another type, an ordered list 
type, or an unordered list type.
-A record type can be defined as being either open or closed.
-Instances of a closed record type are not permitted to contain fields other 
than those specified in the create type statement.
-Instances of an open record type may carry additional fields, and open is the 
default for a new type (if neither option is specified).
+A new type can be an object type, a renaming of another type, an ordered list 
type, or an unordered list type.
+An object type can be defined as being either open or closed.
+Instances of a closed object type are not permitted to contain fields other 
than those specified in the create type statement.
+Instances of an open object type may carry additional fields, and open is the 
default for a new type (if neither option is specified).
 
-The following example creates a new ADM record type called FacebookUser type.
+The following example creates a new ADM object type called FacebookUserType.
 Since it is closed, its instances will contain only what is specified in the 
type definition.
 The first four fields are traditional typed name/value pairs.
 The friend-ids field is an unordered list of 32-bit integers.
-The employment field is an ordered list of instances of another named record 
type, EmploymentType.
+The employment field is an ordered list of instances of another named object 
type, EmploymentType.
 
 ##### Example
 
@@ -598,7 +598,7 @@
       "employment" : [ EmploymentType ]
     }
 
-The next example creates a new ADM record type called FbUserType. Note that 
the type of the id field is UUID. You need to use this field type if you want 
to have this field be an autogenerated-PK field. Refer to the Datasets section 
later for more details.
+The next example creates a new ADM object type called FbUserType. Note that 
the type of the id field is UUID. You need to use this field type if you want 
to have this field be an autogenerated-PK field. Refer to the Datasets section 
later for more details.
 
 ##### Example
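
A minimal sketch; only the UUID-typed id field is dictated by the surrounding
text, and the other field is assumed:

    create type FbUserType as open {
      "id" : uuid,
      "alias" : string
    }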
 
@@ -628,12 +628,12 @@
     PrimaryKey           ::= "primary" "key" Identifier ( "," Identifier )* ( 
"autogenerated ")?
 
 The create dataset statement is used to create a new dataset.
-Datasets are named, unordered collections of ADM record instances; they
+Datasets are named, unordered collections of ADM object instances; they
 are where data lives persistently and are the targets for queries in AsterixDB.
 Datasets are typed, and AsterixDB will ensure that their contents conform to 
their type definitions.
 An Internal dataset (the default) is a dataset that is stored in and managed 
by AsterixDB.
 It must have a specified unique primary key that can be used to partition data 
across nodes of an AsterixDB cluster.
-The primary key is also used in secondary indexes to uniquely identify the 
indexed primary data records. Random primary key (UUID) values can be 
auto-generated by declaring the field to be UUID and putting "autogenerated" 
after the "primary key" identifier. In this case, values for the auto-generated 
PK field should not be provided by the user since it will be auto-generated by 
AsterixDB.
+The primary key is also used in secondary indexes to uniquely identify the 
indexed primary data objects. Random primary key (UUID) values can be 
auto-generated by declaring the field to be UUID and putting "autogenerated" 
after the "primary key" identifier. In this case, values for the auto-generated 
PK field should not be provided by the user since it will be auto-generated by 
AsterixDB.
 Optionally, a filter can be created on a field to further optimize range 
queries with predicates on the filter's field.
 (Refer to [Filter-Based LSM Index Acceleration](filters.html) for more 
information about filters.)
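
For example, assuming a send-time field on an assumed FacebookMessageType, such
a filtered dataset might be declared as:

    create dataset FacebookMessages(FacebookMessageType)
    primary key message-id with filter on send-time;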
 
@@ -667,19 +667,19 @@
 AsterixDB is the prefix policy except when there is a filter on a dataset, 
where the preferred policy for filters is the correlated-prefix.
 
 
-The following example creates an internal dataset for storing FacefookUserType 
records.
+The following example creates an internal dataset for storing FacebookUserType 
objects.
 It specifies that their id field is their primary key.
 
 ##### Example
     create internal dataset FacebookUsers(FacebookUserType) primary key id;
 
-The following example creates an internal dataset for storing FbUserType 
records.
-It specifies that their id field is their primary key. It also specifies that 
the id field is an auto-generated field, meaning that a randomly generated UUID 
value will be assigned to each record by the system. (A user should therefore 
not proivde a value for this field.) Note that the id field should be UUID.
+The following example creates an internal dataset for storing FbUserType 
objects.
+It specifies that their id field is their primary key. It also specifies that 
the id field is an auto-generated field, meaning that a randomly generated UUID 
value will be assigned to each object by the system. (A user should therefore 
not provide a value for this field.) Note that the id field should be UUID.
 
 ##### Example
     create internal dataset FbMsgs(FbUserType) primary key id autogenerated;
 
-The next example creates an external dataset for storing LineitemType records.
+The next example creates an external dataset for storing LineitemType objects.
 The choice of the `hdfs` adapter means that its data will reside in HDFS.
 The create statement provides parameters used by the hdfs adapter:
 the URL and path needed to locate the data in HDFS and a description of the 
data format.
@@ -708,7 +708,7 @@
 An index field is not required to be part of the datatype associated with a 
dataset if that datatype is declared as
 open and the field's type is provided along with its type and the `enforced` 
keyword is specified in the end of index definition.
 `Enforcing` an open field will introduce a check that will make sure that the 
actual type of an indexed
-field (if the field exists in the record) always matches this specified (open) 
field type.
+field (if the field exists in the object) always matches this specified (open) 
field type.
 
 The following example creates a btree index called fbAuthorIdx on the 
author-id field of the FacebookMessages dataset.
 This index can be useful for accelerating exact-match queries, range search 
queries, and joins involving the author-id field.
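
A sketch of that statement, together with an `enforced` variant on an assumed
open send-time field:

    create index fbAuthorIdx on FacebookMessages(author-id) type btree;
    create index fbSendTimeIdx on FacebookMessages(send-time: datetime?) type btree enforced;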
@@ -834,11 +834,10 @@
 If the query part of an insert returns a single object, then the insert 
statement itself will
 be a single, atomic transaction.
 If the query part returns multiple objects, then each object inserted will be 
handled independently
-as a tranaction. If a dataset has an auto-generated primary key field, an 
insert statement should not include a value for that field in it. (The system 
will automatically extend the provided record with this additional field and a 
corresponding value.).
-The optional "as Variable" provides a variable binding for the inserted 
records, which can be used in the "returning" clause.
-The optional "returning Query" allows users to run simple queries/functions on 
the records returned by the insert.
+as a transaction. If a dataset has an auto-generated primary key field, an 
insert statement should not include a value for that field in it. (The system 
will automatically extend the provided object with this additional field and a 
corresponding value.)
+The optional "as Variable" provides a variable binding for the inserted 
objects, which can be used in the "returning" clause.
+The optional "returning Query" allows users to run simple queries/functions on 
the objects returned by the insert.
 This query cannot refer to any datasets.
-
 
 The following example illustrates a query-based insertion.
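
A sketch of the form it takes, with an assumed UsersCopy target dataset:

    insert into dataset UsersCopy (
        for $user in dataset FacebookUsers
        return $user
    );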
 
diff --git a/asterixdb/asterix-doc/src/site/markdown/aql/primer.md 
b/asterixdb/asterix-doc/src/site/markdown/aql/primer.md
index e07edb6..d245158 100644
--- a/asterixdb/asterix-doc/src/site/markdown/aql/primer.md
+++ b/asterixdb/asterix-doc/src/site/markdown/aql/primer.md
@@ -132,12 +132,12 @@
 The first three lines above tell AsterixDB to drop the old TinySocial 
dataverse, if one already
 exists, and then to create a brand new one and make it the focus of the 
statements that follow.
 The first _create type_ statement creates a datatype for holding information 
about Chirp users.
-It is a record type with a mix of integer and string data, very much like a 
(flat) relational tuple.
+It is an object type with a mix of integer and string data, very much like a 
(flat) relational tuple.
 The indicated fields are all mandatory, but because the type is open, 
additional fields are welcome.
 The second statement creates a datatype for Chirp messages; this shows how to 
specify a closed type.
 Interestingly (based on one of Chirp's APIs), each Chirp message actually 
embeds an instance of the
 sending user's information (current as of when the message was sent), so this 
is an example of a nested
-record in ADM.
+object in ADM.
 Chirp messages can optionally contain the sender's location, which is modeled 
via the senderLocation
 field of spatial type _point_; the question mark following the field type 
indicates its optionality.
 An optional field is like a nullable field in SQL---it may be present or 
missing, but when it's present,
@@ -147,11 +147,11 @@
 this field holds a bag (*a.k.a.* an unordered list) of strings.
 Since the overall datatype definition for Chirp messages says "closed", the 
fields that it lists are
 the only fields that instances of this type will be allowed to contain.
-The next two _create type_ statements create a record type for holding 
information about one component of
-the employment history of a Gleambook user and then a record type for holding 
the user information itself.
+The next two _create type_ statements create an object type for holding 
information about one component of
+the employment history of a Gleambook user and then an object type for holding 
the user information itself.
 The Gleambook user type highlights a few additional ADM data model features.
 Its friendIds field is a bag of integers, presumably the Gleambook user ids 
for this user's friends,
-and its employment field is an ordered list of employment records.
+and its employment field is an ordered list of employment objects.
 The final _create type_ statement defines a type for handling the content of a 
Gleambook message in our
 hypothetical social data storage scenario.
 
@@ -243,7 +243,7 @@
 ## Loading Data Into AsterixDB ##
 Okay, so far so good---AsterixDB is now ready for data, so let's give it some 
data to store.
 Our next task will be to load some sample data into the four datasets that we 
just defined.
-Here we will load a tiny set of records, defined in ADM format (a superset of 
JSON), into each dataset.
+Here we will load a tiny set of objects, defined in ADM format (a superset of 
JSON), into each dataset.
 In the boxes below you can see the actual data instances contained in each of 
the provided sample files.
 In order to load this data yourself, you should first store the four 
corresponding `.adm` files
 (whose URLs are indicated on top of each box below) into a filesystem 
directory accessible to your
@@ -307,7 +307,7 @@
         
{"messageId":14,"authorId":9,"inResponseTo":12,"senderLocation":point("41.33,85.28"),"message":"
 love at&t its 3G is good:)"}
         
{"messageId":15,"authorId":7,"inResponseTo":11,"senderLocation":point("44.47,67.11"),"message":"
 like iphone the voicemail-service is awesome"}
 
-It's loading time! We can use AQL _LOAD_ statements to populate our datasets 
with the sample records shown above.
+It's loading time! We can use AQL _LOAD_ statements to populate our datasets 
with the sample objects shown above.
 The following shows how loading can be done for data stored in `.adm` files in 
your local filesystem.
 *Note:* You _MUST_ replace the `<Host Name>` and `<Absolute File Path>` 
placeholders in each load
 statement below with valid values based on the host IP address (or host name) 
for the machine and
@@ -466,7 +466,7 @@
         };
 
 The result of this query is a sequence of new ADM instances, one for each 
author/message pair.
-Each instance in the result will be an ADM record containing two fields, 
"uname" and "message",
+Each instance in the result will be an ADM object containing two fields, 
"uname" and "message",
 containing the user's name and the message text, respectively, for each 
author/message pair.
 (Note that "uname" and "message" are both simple AQL expressions 
themselves---so in the most
 general case, even the resulting field names can be computed as part of the 
query, making AQL
@@ -561,7 +561,7 @@
 
 The AQL language supports nesting, both of queries and of query results, and 
the combination allows for
 an arguably cleaner/more natural approach to such queries.
-As an example, supposed we wanted, for each Gleambook user, to produce a 
record that has his/her name
+As an example, suppose we wanted, for each Gleambook user, to produce an 
object that has his/her name
 plus a list of the messages written by that user.
 In SQL, this would involve a left outer join between users and messages, 
grouping by user, and having
 the user name repeated along side each message.
@@ -578,7 +578,7 @@
         };
 
 This AQL query binds the variable `$user` to the data instances in 
GleambookUsers;
-for each user, it constructs a result record containing a "uname" field with 
the user's
+for each user, it constructs a result object containing a "uname" field with 
the user's
 name and a "messages" field with a nested collection of all messages for that 
user.
 The nested collection for each user is specified by using a correlated 
subquery.
 (Note: While it looks like nested loops could be involved in computing the 
result,
@@ -678,7 +678,7 @@
 The expressive power of AQL includes support for queries involving "some" 
(existentially quantified)
 and "all" (universally quantified) query semantics.
 As an example of an existential AQL query, here we show a query to list the 
Gleambook users who are currently employed.
-Such employees will have an employment history containing a record with the 
endDate value missing, which leads us to the
+Such employees will have an employment history containing an object with the 
endDate value missing, which leads us to the
 following AQL query:
 
         use dataverse TinySocial;
@@ -699,7 +699,7 @@
 
 ### Query 7 - Universal Quantification ###
 As an example of a universal AQL query, here we show a query to list the 
Gleambook users who are currently unemployed.
-Such employees will have an employment history containing no records that miss 
endDate values, leading us to the
+Such employees will have an employment history containing no objects that miss 
endDate values, leading us to the
 following AQL query:
 
         use dataverse TinySocial;
@@ -747,11 +747,11 @@
 with each such group having an associated $uid variable value (i.e., the 
chirping user's screen name).
 In the context of the return clause, due to "... with $cm ...", $uid is bound 
to the chirper's id and $cm
 is bound to the _set_ of chirps issued by that chirper.
-The return clause constructs a result record containing the chirper's user id 
and the count of the items
+The return clause constructs a result object containing the chirper's user id 
and the count of the items
 in the associated chirp set.
-The query result will contain one such record per screen name.
+The query result will contain one such object per screen name.
 This query also illustrates another feature of AQL; notice that each user's 
screen name is accessed via a
-path syntax that traverses each chirp's nested record structure.
+path syntax that traverses each chirp's nested object structure.
 
 Here is the expected result for this query over the sample data:
 
@@ -832,7 +832,7 @@
 
 This query illustrates several things worth knowing in order to write fuzzy 
queries in AQL.
 First, as mentioned earlier, AQL offers an operator-based syntax for seeing 
whether two values are "similar" to one another or not.
-Second, recall that the referredTopics field of records of datatype 
ChirpMessageType is a bag of strings.
+Second, recall that the referredTopics field of objects of datatype 
ChirpMessageType is a bag of strings.
 This query sets the context for its similarity join by requesting that 
Jaccard-based similarity semantics
 
([http://en.wikipedia.org/wiki/Jaccard_index](http://en.wikipedia.org/wiki/Jaccard_index))
 be used for the query's similarity operator and that a similarity index of 0.3 
be used as its similarity threshold.
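
The context-setting statements referred to here take the following form:

        set simfunction "jaccard";
        set simthreshold "0.3f";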
@@ -881,7 +881,7 @@
 
 In general, the data to be inserted may be specified using any valid AQL query 
expression.
 The insertion of a single object instance, as in this example, is just a 
special case where
-the query expression happens to be a record constructor involving only 
constants.
+the query expression happens to be an object constructor involving only 
constants.
 
 ### Deleting Existing Data  ###
 In addition to inserting new data, AsterixDB supports deletion from datasets 
via the AQL _delete_ statement.
@@ -896,13 +896,13 @@
 
 It should be noted that one form of data change not yet supported by AsterixDB 
is in-place data modification (_update_).
 Currently, only insert and delete operations are supported; update is not.
-To achieve the effect of an update, two statements are currently needed---one 
to delete the old record from the
-dataset where it resides, and another to insert the new replacement record 
(with the same primary key but with
+To achieve the effect of an update, two statements are currently needed---one 
to delete the old object from the
+dataset where it resides, and another to insert the new replacement object 
(with the same primary key but with
 different field values for some of the associated data content).
 
 ### Upserting Data  ###
 In addition to loading, querying, inserting, and deleting data, AsterixDB 
supports upserting
-records using the AQL _upsert_ statement.
+objects using the AQL _upsert_ statement.
 
 The following example deletes the chirp with chirpId = 20 (if one exists) and 
inserts the
 new chirp with chirpId = 20 by user "SwanSmitty" to the ChirpMessages dataset. 
The two
@@ -948,11 +948,11 @@
 Note that such an upsert operation is executed in two steps:
 The query is performed, after which the query's locks are released,
 and then its result is upserted into the dataset.
-This means that a record can be modified between computing the query result 
and performing the upsert.
+This means that an object can be modified between computing the query result 
and performing the upsert.
 
 ### Transaction Support
 
-AsterixDB supports record-level ACID transactions that begin and terminate 
implicitly for each record inserted, deleted, or searched while a given AQL 
statement is being executed. This is quite similar to the level of transaction 
support found in today's NoSQL stores. AsterixDB does not support 
multi-statement transactions, and in fact an AQL statement that involves 
multiple records can itself involve multiple independent record-level 
transactions. An example consequence of this is that, when an AQL statement 
attempts to insert 1000 records, it is possible that the first 800 records 
could end up being committed while the remaining 200 records fail to be 
inserted. This situation could happen, for example, if a duplicate key 
exception occurs as the 801st insertion is attempted. If this happens, 
AsterixDB will report the error (e.g., a duplicate key exception) as the result 
of the offending AQL insert statement, and the application logic above will 
need to take the appropriate 
 action(s) needed to assess the resulting state and to clean up and/or continue 
as appropriate.
+AsterixDB supports object-level ACID transactions that begin and terminate 
implicitly for each object inserted, deleted, or searched while a given AQL 
statement is being executed. This is quite similar to the level of transaction 
support found in today's NoSQL stores. AsterixDB does not support 
multi-statement transactions, and in fact an AQL statement that involves 
multiple objects can itself involve multiple independent object-level 
transactions. An example consequence of this is that, when an AQL statement 
attempts to insert 1000 objects, it is possible that the first 800 objects 
could end up being committed while the remaining 200 objects fail to be 
inserted. This situation could happen, for example, if a duplicate key 
exception occurs as the 801st insertion is attempted. If this happens, 
AsterixDB will report the error (e.g., a duplicate key exception) as the result 
of the offending AQL insert statement, and the application logic above will 
need to take the appropriate 
 action(s) needed to assess the resulting state and to clean up and/or continue 
as appropriate.
 
 ## Further Help ##
 That's it!  You are now armed and dangerous with respect to semistructured 
data management using AsterixDB and AQL.
diff --git a/asterixdb/asterix-doc/src/site/markdown/aql/similarity.md 
b/asterixdb/asterix-doc/src/site/markdown/aql/similarity.md
index 9fa3d44..88ca8a5 100644
--- a/asterixdb/asterix-doc/src/site/markdown/aql/similarity.md
+++ b/asterixdb/asterix-doc/src/site/markdown/aql/similarity.md
@@ -30,7 +30,7 @@
 ## <a id="Motivation">Motivation</a> <font size="4"><a href="#toc">[Back to 
TOC]</a></font> ##
 
 Similarity queries are widely used in applications where users need to
-find records that satisfy a similarity predicate, while exact matching
+find objects that satisfy a similarity predicate, while exact matching
 is not sufficient. These queries are especially important for social
 and Web applications, where errors, abbreviations, and inconsistencies
 are common.  As an example, we may want to find all the movies
@@ -214,7 +214,7 @@
 
 A "keyword index" is constructed on a set of strings or sets (e.g., 
OrderedList, UnorderedList). Instead of
 generating grams as in an ngram index, we generate tokens (e.g., words) and 
for each token, construct an inverted list that includes the ids of the
-records with this token.  The following two examples show how to create 
keyword index on two different types:
+objects with this token.  The following two examples show how to create a 
keyword index on two different types:
 
 
 #### Keyword Index on String Type ####
diff --git a/asterixdb/asterix-doc/src/site/markdown/csv.md 
b/asterixdb/asterix-doc/src/site/markdown/csv.md
index d761aaf..48e045a 100644
--- a/asterixdb/asterix-doc/src/site/markdown/csv.md
+++ b/asterixdb/asterix-doc/src/site/markdown/csv.md
@@ -23,15 +23,15 @@
 
 AsterixDB supports the CSV format for both data input and query result
 output. In both cases, the structure of the CSV data must be defined
-using a named ADM record datatype. The CSV format, limitations, and
+using a named ADM object datatype. The CSV format, limitations, and
 MIME type are defined by [RFC
 4180](https://tools.ietf.org/html/rfc4180).
 
 CSV is not as expressive as the full Asterix Data Model, meaning that
 not all data which can be represented in ADM can also be represented
 as CSV. So the form of this datatype is limited. First, obviously it
-may not contain any nested records or lists, as CSV has no way to
-represent nested data structures. All fields in the record type must
+may not contain any nested objects or lists, as CSV has no way to
+represent nested data structures. All fields in the object type must
 be primitive. Second, the set of supported primitive types is limited
 to numerics (`int8`, `int16`, `int32`, `int64`, `float`, `double`) and
 `string`.  On output, a few additional primitive types (`boolean`,
@@ -101,11 +101,11 @@
 ## CSV Output
 
 Any query may be rendered as CSV when using AsterixDB's HTTP
-interface.  To do so, there are two steps required: specify the record
+interface.  To do so, there are two steps required: specify the object
 type which defines the schema of your CSV, and request that Asterix
 use the CSV output format.
 
-#### Output Record Type
+#### Output Object Type
 
 Background: The result of any AQL query is an unordered list of
 _instances_, where each _instance_ is an instance of an AQL
@@ -113,24 +113,24 @@
 the legal datatypes in this unordered list due to the limited
 expressability of CSV:
 
-1. Each instance must be of a record type.
-2. Each instance must be of the _same_ record type.
-3. The record type must conform to the content and type restrictions
+1. Each instance must be of an object type.
+2. Each instance must be of the _same_ object type.
+3. The object type must conform to the content and type restrictions
 mentioned in the introduction.
 
 While it would be possible to structure your query to cast all result
 instances to a given type, it is not necessary. AQL offers a built-in
 feature which will automatically cast all top-level instances in the
-result to a specified named ADM record type. To enable this feature,
+result to a specified named ADM object type. To enable this feature,
 use a `set` statement prior to the query to set the parameter
-`output-record-type` to the name of an ADM type. This type must have
+`output-object-type` to the name of an ADM type. This type must have
 already been defined in the current dataverse.
 
 For example, the following request will ensure that all result
 instances are cast to the `csv_type` type declared earlier:
 
     use dataverse csv;
-    set output-record-type "csv_type";
+    set output-object-type "csv_type";
 
     for $n in dataset "csv_set" return $n;
 
@@ -139,13 +139,13 @@
 complex query where the result values are created by joining fields
 from different underlying datasets, etc.
 
-Two notes about `output-record-type`:
+Two notes about `output-object-type`:
 
 1. This feature is not strictly related to CSV; it may be used with
-any output formats (in which case, any record datatype may be
+any output formats (in which case, any object datatype may be
 specified, not subject to the limitations specified in the
 introduction of this page).
-2. When the CSV output format is requested, `output-record-type` is in
+2. When the CSV output format is requested, `output-object-type` is in
 fact required, not optional. This is because the type is used to
 determine the field names for the CSV header and to ensure that the
 ordering of fields in the output is consistent (which is obviously
@@ -171,14 +171,14 @@
     curl -G "http://localhost:19002/query"; \
         --data-urlencode 'output=CSV' \
         --data-urlencode 'query=use dataverse csv;
-              set output-record-type "csv_type";
+              set output-object-type "csv_type";
               for $n in dataset csv_set return $n;'
 
 Alternately, the same query using the `Accept` header:
 
     curl -G -H "Accept: text/csv" "http://localhost:19002/query"; \
         --data-urlencode 'query=use dataverse csv;
-              set output-record-type "csv_type";
+              set output-object-type "csv_type";
               for $n in dataset csv_set return $n;'
 
 Similarly, a trivial Java program to execute the above sample query
@@ -194,7 +194,7 @@
     public class AsterixExample {
         public static void main(String[] args) throws Exception {
             String query = "use dataverse csv; " +
-                "set output-record-type \"csv_type\";" +
+                "set output-object-type \"csv_type\";" +
                 "for $n in dataset csv_set return $n";
             URL asterix = new URL("http://localhost:19002/query?query="; +
                                   URLEncoder.encode(query, "UTF-8"));
@@ -230,16 +230,16 @@
 
 #### Issues with open datatypes and optional fields
 
-As mentioned earlier, CSV is a rigid format. It cannot express records
+As mentioned earlier, CSV is a rigid format. It cannot express objects
 with different numbers of fields, which ADM allows through both open
 datatypes and optional fields.
 
-If your output record type contains optional fields, this will not
+If your output object type contains optional fields, this will not
 result in any errors. If the output data of a query does not contain
 values for an optional field, this will be represented in CSV as
 `null`.
 
-If your output record type is open, this will also not result in any
+If your output object type is open, this will also not result in any
 errors. If the output data of a query contains any open fields, the
 corresponding rows in the resulting CSV will contain more
 comma-separated values than the others. On each such row, the data
@@ -253,6 +253,6 @@
 CSV processors. Some may throw a parsing error. If you attempt to load
 this data into AsterixDB later using `load dataset`, the extra fields
 will be silently ignored. For this reason it is recommended that you
-use only closed datatypes as output record types. AsterixDB allows to
-use an open record type only to support cases where the type already
+use only closed datatypes as output object types. AsterixDB allows the use of
+an open object type only to support cases where the type already
 exists for other parts of your application.
diff --git a/asterixdb/asterix-doc/src/site/markdown/datamodel.md 
b/asterixdb/asterix-doc/src/site/markdown/datamodel.md
index 5a5aced..cba0d18 100644
--- a/asterixdb/asterix-doc/src/site/markdown/datamodel.md
+++ b/asterixdb/asterix-doc/src/site/markdown/datamodel.md
@@ -43,7 +43,7 @@
     * [Null](#IncompleteInformationTypesNull)
     * [Missing](#IncompleteInformationTypesMissing)
 * [Derived Types](#DerivedTypes)
-    * [Record](#DerivedTypesRecord)
+    * [Object](#DerivedTypesObject)
     * [Array](#DerivedTypesArray)
     * [Multiset](#DerivedTypesMultiset)
 
@@ -350,12 +350,12 @@
 
 
 ### <a id="IncompleteInformationTypesMissing">Missing</a> ###
-`missing` represents a missing name-value pair in a record.
+`missing` represents a missing name-value pair in an object.
 If the referenced field does not exist, an empty result value is returned by 
the query.
 
 As neither the data model nor the system enforces homogeneity for datasets or 
collections,
 items in a dataset or collection can be of heterogeneous types and
-so a field can be present in one record and `missing` in another.
+so a field can be present in one object and `missing` in another.
 
  * Example:
 
@@ -366,12 +366,12 @@
 
         {  }
 
-Since a field with value `missing` means the field is absent, we get an empty 
record.
+Since a field with value `missing` means the field is absent, we get an empty 
object.
 
 ## <a id="DerivedTypes">Derived Types</a> ##
 
-### <a id="DerivedTypesRecord">Record</a>###
-A `record` contains a set of fields, where each field is described by its name 
and type. A record type is either open or closed. Open records can contain 
fields that are not part of the type definition, while closed records cannot. 
Syntactically, record constructors are surrounded by curly braces "{...}".
+### <a id="DerivedTypesObject">Object</a>###
+A `object` contains a set of fields, where each field is described by its name 
and type. A object type is either open or closed. Open objects can contain 
fields that are not part of the type definition, while closed objects cannot. 
Syntactically, object constructors are surrounded by curly braces "{...}".
 
 An example would be
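(sketched here with assumed field values):

        { "id": 213508791, "name": "Alice", "age": 32 }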
 
diff --git a/asterixdb/asterix-doc/src/site/markdown/feeds/tutorial.md 
b/asterixdb/asterix-doc/src/site/markdown/feeds/tutorial.md
index fb06a92..d2f403a 100644
--- a/asterixdb/asterix-doc/src/site/markdown/feeds/tutorial.md
+++ b/asterixdb/asterix-doc/src/site/markdown/feeds/tutorial.md
@@ -34,7 +34,7 @@
 ## <a name="FeedAdaptors">Feed Adaptors</a>  ##
 
 The functionality of establishing a connection with a data source
-and receiving, parsing and translating its data into ADM records
+and receiving, parsing and translating its data into ADM objects
 (for storage inside AsterixDB) is contained in a feed adaptor. A
 feed adaptor is an implementation of an interface and its details are
 specific to a given data source. An adaptor may optionally be given
@@ -229,19 +229,19 @@
 values. An ingestion policy dictates the runtime behavior of
 the feed in response to resource bottlenecks and failures. AsterixDB provides
 a list of policy parameters that help customize the
-system's runtime behavior when handling excess records. AsterixDB
+system's runtime behavior when handling excess objects. AsterixDB
 provides a set of built-in policies, each constructed by setting
 appropriate value(s) for the policy parameter(s) from the table below.
 
 ####Policy Parameters 
 
-- *excess.records.spill*: Set to true if records that cannot be processed by 
an operator for lack of resources (referred to as excess records hereafter) 
should be persisted to the local disk for deferred processing. (Default: false)
+- *excess.objects.spill*: Set to true if objects that cannot be processed by 
an operator for lack of resources (referred to as excess objects hereafter) 
should be persisted to the local disk for deferred processing. (Default: false)
 
-- *excess.records.discard*: Set to true if excess records should be discarded. 
(Default: false)
+- *excess.objects.discard*: Set to true if excess objects should be discarded. 
(Default: false)
 
-- *excess.records.throttle*: Set to true if rate of arrival of records is 
required to be reduced in an adaptive manner to prevent having any excess 
records (Default: false)
+- *excess.objects.throttle*: Set to true if the rate of arrival of objects 
must be reduced in an adaptive manner to prevent having any excess 
objects. (Default: false)
 
-- *excess.records.elastic*: Set to true if the system should attempt to 
resolve resource bottlenecks by re-structuring and/or rescheduling the feed 
ingestion pipeline. (Default: false)
+- *excess.objects.elastic*: Set to true if the system should attempt to 
resolve resource bottlenecks by re-structuring and/or rescheduling the feed 
ingestion pipeline. (Default: false)
 
 - *recover.soft.failure*:  Set to true if the feed must attempt to survive any 
runtime exception. A false value permits an early termination of a feed in such 
an event. (Default: true)
 
@@ -249,7 +249,7 @@
 
 Note that the end user may choose to form a custom policy.  For example,
 it is possible in AsterixDB to create a custom policy that spills excess
-records to disk and subsequently resorts to throttling if the
+objects to disk and subsequently resorts to throttling if the
 spillage crosses a configured threshold. In all cases, the desired
 ingestion policy is specified as part of the `connect feed` statement
 or else the "Basic" policy will be chosen as the default.
diff --git a/asterixdb/asterix-doc/src/site/markdown/sqlpp/primer-sqlpp.md 
b/asterixdb/asterix-doc/src/site/markdown/sqlpp/primer-sqlpp.md
index af63520..7dc1953 100644
--- a/asterixdb/asterix-doc/src/site/markdown/sqlpp/primer-sqlpp.md
+++ b/asterixdb/asterix-doc/src/site/markdown/sqlpp/primer-sqlpp.md
@@ -134,12 +134,12 @@
 The first three lines above tell AsterixDB to drop the old TinySocial 
dataverse, if one already
 exists, and then to create a brand new one and make it the focus of the 
statements that follow.
 The first _CREATE TYPE_ statement creates a datatype for holding information 
about Chirp users.
-It is a record type with a mix of integer and string data, very much like a 
(flat) relational tuple.
+It is an object type with a mix of integer and string data, very much like a 
(flat) relational tuple.
 The indicated fields are all mandatory, but because the type is open, 
additional fields are welcome.
 The second statement creates a datatype for Chirp messages; this shows how to 
specify a closed type.
 Interestingly (based on one of Chirp's APIs), each Chirp message actually 
embeds an instance of the
 sending user's information (current as of when the message was sent), so this 
is an example of a nested
-record in ADM.
+object in ADM.
 Chirp messages can optionally contain the sender's location, which is modeled 
via the senderLocation
 field of spatial type _point_; the question mark following the field type 
indicates its optionality.
 An optional field is like a nullable field in SQL---it may be present or 
missing, but when it's present,
@@ -149,11 +149,11 @@
 this field holds a bag (*a.k.a.* an unordered list) of strings.
 Since the overall datatype definition for Chirp messages says "closed", the 
fields that it lists are
 the only fields that instances of this type will be allowed to contain.
-The next two _CREATE TYPE_ statements create a record type for holding 
information about one component of
-the employment history of a Gleambook user and then a record type for holding 
the user information itself.
+The next two _CREATE TYPE_ statements create an object type for holding 
information about one component of
+the employment history of a Gleambook user and then an object type for holding 
the user information itself.
 The Gleambook user type highlights a few additional ADM data model features.
 Its friendIds field is a bag of integers, presumably the Gleambook user ids 
for this user's friends,
-and its employment field is an ordered list of employment records.
+and its employment field is an ordered list of employment objects.
 The final _CREATE TYPE_ statement defines a type for handling the content of a 
Gleambook message in our
 hypothetical social data storage scenario.
 
@@ -242,14 +242,14 @@
 Second, they show how to escape SQL++ keywords (or other special names) in 
object names by using backquotes.
 Last but not least, they show that SQL++ supports a _SELECT VALUE_ variation 
of SQL's traditional _SELECT_
 statement that returns a single value (or element) from a query instead of 
constructing a new
-record as the query's result like _SELECT_ does; here, the returned value is 
an entire record from
+object as the query's result like _SELECT_ does; here, the returned value is 
an entire object from
 the dataset being queried (e.g., _SELECT VALUE ds_ in the first statement 
returns the entire
-record from the metadata dataset containing the descriptions of all datasets.
+object from the metadata dataset containing the descriptions of all datasets.)
 
 ## Loading Data Into AsterixDB ##
 Okay, so far so good---AsterixDB is now ready for data, so let's give it some 
data to store.
 Our next task will be to load some sample data into the four datasets that we 
just defined.
-Here we will load a tiny set of records, defined in ADM format (a superset of 
JSON), into each dataset.
+Here we will load a tiny set of objects, defined in ADM format (a superset of 
JSON), into each dataset.
 In the boxes below you can see the actual data instances contained in each of 
the provided sample files.
 In order to load this data yourself, you should first store the four 
corresponding `.adm` files
 (whose URLs are indicated on top of each box below) into a filesystem 
directory accessible to your
@@ -313,7 +313,7 @@
         
{"messageId":14,"authorId":9,"inResponseTo":12,"senderLocation":point("41.33,85.28"),"message":"
 love at&t its 3G is good:)"}
         
{"messageId":15,"authorId":7,"inResponseTo":11,"senderLocation":point("44.47,67.11"),"message":"
 like iphone the voicemail-service is awesome"}
 
-It's loading time! We can use SQL++ _LOAD_ statements to populate our datasets 
with the sample records shown above.
+It's loading time! We can use SQL++ _LOAD_ statements to populate our datasets 
with the sample objects shown above.
 The following shows how loading can be done for data stored in `.adm` files in 
your local filesystem.
 *Note:* You _MUST_ replace the `<Host Name>` and `<Absolute File Path>` 
placeholders in each load
 statement below with valid values based on the host IP address (or host name) 
for the machine and
@@ -384,7 +384,7 @@
 As in SQL, the query's _FROM_ clause  binds the variable `user` incrementally 
to the data instances residing in
 the dataset named GleambookUsers.
 Its _WHERE_ clause  selects only those bindings having a user id of interest, 
filtering out the rest.
-The _SELECT_ _VALUE_ clause returns the (entire) data value (a Gleambook user 
record in this case)
+The _SELECT_ _VALUE_ clause returns the (entire) data value (a Gleambook user 
object in this case)
 for each binding that satisfies the predicate.
 Since this dataset is indexed on user id (its primary key), this query will be 
done via a quick index lookup.
 
@@ -442,10 +442,10 @@
         WHERE msg.authorId = user.id;
 
 The result of this query is a sequence of new ADM instances, one for each 
author/message pair.
-Each instance in the result will be an ADM record containing two fields, 
"uname" and "message",
+Each instance in the result will be an ADM object containing two fields, 
"uname" and "message",
 containing the user's name and the message text, respectively, for each 
author/message pair.
 Notice how the use of a traditional SQL-style _SELECT_ clause, as opposed to 
the new SQL++ _SELECT VALUE_
-clause, automatically results in the construction of a new record value for 
each result.
+clause, automatically results in the construction of a new object value for 
each result.
 
 The expected result of this example SQL++ join query for our sample data set 
is:
 
@@ -473,9 +473,9 @@
         FROM GleambookUsers user, GleambookMessages msg
         WHERE msg.authorId = user.id;
 
-In SQL++, this _SELECT *_ query will produce a new nested record for each 
user/message pair.
-Each result record contains one field (named after the "user" variable) to 
hold the user record
-and another field (named after the "msg" variable) to hold the matching 
message record.
+In SQL++, this _SELECT *_ query will produce a new nested object for each 
user/message pair.
+Each result object contains one field (named after the "user" variable) to 
hold the user object
+and another field (named after the "msg" variable) to hold the matching 
message object.
 Note that the nested nature of this SQL++ _SELECT *_ result is different than 
traditional SQL,
 as SQL was not designed to handle the richer, nested data model that underlies 
the design of SQL++.
 
@@ -505,7 +505,7 @@
         FROM GleambookUsers user, GleambookMessages msg
         WHERE msg.authorId = user.id;
 
-This version of the query uses an explicit record constructor to build each 
result record.
+This version of the query uses an explicit object constructor to build each 
result object.
 (Note that "uname" and "message" are both simple SQL++ expressions 
themselves---so in the most general case,
 even the resulting field names can be computed as part of the query,
 making SQL++ a very powerful tool for slicing and dicing semistructured data.)
@@ -532,7 +532,7 @@
         WHERE msg.authorId /*+ indexnl */ = user.id;
 
 In addition to illustrating the use of a hint, the query also shows how to 
achieve the same
-result record format using _SELECT_ and _AS_ instead of using an explicit 
record constructor.
+result object format using _SELECT_ and _AS_ instead of using an explicit 
object constructor.
 The expected result is (of course) the same as before, modulo the order of the 
instances.
 Result ordering is (intentionally) undefined in SQL++ in the absence of an 
_ORDER BY_ clause.
 The query result for our sample data in this case is:
@@ -567,7 +567,7 @@
 
 The SQL++ language supports nesting, both of queries and of query results, and 
the combination allows for
 an arguably cleaner/more natural approach to such queries.
-As an example, supposed we wanted, for each Gleambook user, to produce a 
record that has his/her name
+As an example, suppose we wanted, for each Gleambook user, to produce an 
object that has his/her name
 plus a list of the messages written by that user.
 In SQL, this would involve a left outer join between users and messages, 
grouping by user, and having
 the user name repeated along side each message.
@@ -582,7 +582,7 @@
         FROM GleambookUsers user;
 
 This SQL++ query binds the variable `user` to the data instances in 
GleambookUsers;
-for each user, it constructs a result record containing a "uname" field with 
the user's
+for each user, it constructs a result object containing a "uname" field with 
the user's
 name and a "messages" field with a nested collection of all messages for that 
user.
 The nested collection for each user is specified by using a correlated 
subquery.
 (Note: While it looks like nested loops could be involved in computing the 
result,
@@ -673,7 +673,7 @@
 The expressive power of SQL++ includes support for queries involving "some" 
(existentially quantified)
 and "all" (universally quantified) query semantics.
 As an example of an existential SQL++ query, here we show a query to list the 
Gleambook users who are currently employed.
-Such employees will have an employment history containing a record in which 
the end-date field is _MISSING_
+Such employees will have an employment history containing an object in which 
the end-date field is _MISSING_
 (or it could be there but have the value _NULL_, as JSON unfortunately 
provides two ways to represent unknown values).
 This leads us to the following SQL++ query:
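
A sketch, assuming the employment objects carry a camelCase endDate field:

        USE TinySocial;

        SELECT VALUE user
        FROM GleambookUsers user
        WHERE (SOME e IN user.employment SATISFIES e.endDate IS UNKNOWN);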
 
@@ -695,7 +695,7 @@
 
 ### Query 7 - Universal Quantification ###
 As an example of a universal SQL++ query, here we show a query to list the 
Gleambook users who are currently unemployed.
-Such employees will have an employment history containing no records with 
unknown end-date field values, leading us to the
+Such employees will have an employment history containing no objects with 
unknown end-date field values, leading us to the
 following SQL++ query:
 
         USE TinySocial;
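
        /* A sketch of the remainder, with an assumed camelCase endDate field;
           EVERY expresses the universal quantification. */
        SELECT VALUE user
        FROM GleambookUsers user
        WHERE (EVERY e IN user.employment SATISFIES e.endDate IS NOT UNKNOWN);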
@@ -759,11 +759,11 @@
 with each such group having an associated _uid_ variable value (i.e., the 
chirping user's screen name).
 In the context of the _SELECT_ clause, _uid_ is bound to the chirper's id and 
_cm_
 is now re-bound (due to grouping) to the _set_ of chirps issued by that 
chirper.
-The _SELECT_ clause yields a result record containing the chirper's user id 
and the count of the items
+The _SELECT_ clause yields a result object containing the chirper's user id 
and the count of the items
 in the associated chirp set.
-The query result will contain one such record per screen name.
+The query result will contain one such object per screen name.
 This query also illustrates another feature of SQL++; notice how each user's 
screen name is accessed via a
-path syntax that traverses each chirp's nested record structure.
+path syntax that traverses each chirp's nested object structure.
 
 Here is the expected result for this query over the sample data:
 
@@ -835,7 +835,7 @@
 This query illustrates several things worth knowing in order to write fuzzy 
queries in SQL++.
 First, as mentioned earlier, SQL++ offers an operator-based syntax (as well as 
a functional approach, not shown)
 for seeing whether two values are "similar" to one another or not.
-Second, recall that the referredTopics field of records of datatype 
ChirpMessageType is a bag of strings.
+Second, recall that the referredTopics field of objects of datatype 
ChirpMessageType is a bag of strings.
 This query sets the context for its similarity join by requesting that 
Jaccard-based similarity semantics
 
([http://en.wikipedia.org/wiki/Jaccard_index](http://en.wikipedia.org/wiki/Jaccard_index))
 be used for the query's similarity operator and that a similarity index of 0.3 
be used as its similarity threshold.
@@ -884,7 +884,7 @@
 
 In general, the data to be inserted may be specified using any valid SQL++ 
query expression.
 The insertion of a single object instance, as in this example, is just a 
special case where
-the query expression happens to be a record constructor involving only 
constants.
+the query expression happens to be an object constructor involving only 
constants.
 
 ### Deleting Existing Data  ###
 In addition to inserting new data, AsterixDB supports deletion from datasets 
via the SQL++ _DELETE_ statement.
@@ -898,16 +898,16 @@
 
 It should be noted that one form of data change not yet supported by AsterixDB 
is in-place data modification (_update_).
 Currently, only insert and delete operations are supported in SQL++; updates 
are not.
-To achieve the effect of an update, two SQL++ statements are currently 
needed---one to delete the old record from the
-dataset where it resides, and another to insert the new replacement record 
(with the same primary key but with
+To achieve the effect of an update, two SQL++ statements are currently 
needed---one to delete the old object from the
+dataset where it resides, and another to insert the new replacement object 
(with the same primary key but with
 different field values for some of the associated data content).
-AQL additionally supports an upsert operation to either insert a record, if no 
record with its primary key is currently
-present in the dataset, or to replace the existing record if one already 
exists with the primary key value being upserted.
+AQL additionally supports an upsert operation to either insert a object, if no 
object with its primary key is currently
+present in the dataset, or to replace the existing object if one already 
exists with the primary key value being upserted.
 SQL++ will soon have _UPSERT_ as well.
 
 ### Transaction Support
 
-AsterixDB supports record-level ACID transactions that begin and terminate 
implicitly for each record inserted, deleted, or searched while a given SQL++ 
statement is being executed. This is quite similar to the level of transaction 
support found in today's NoSQL stores. AsterixDB does not support 
multi-statement transactions, and in fact an SQL++ statement that involves 
multiple records can itself involve multiple independent record-level 
transactions. An example consequence of this is that, when an SQL++ statement 
attempts to insert 1000 records, it is possible that the first 800 records 
could end up being committed while the remaining 200 records fail to be 
inserted. This situation could happen, for example, if a duplicate key 
exception occurs as the 801st insertion is attempted. If this happens, 
AsterixDB will report the error (e.g., a duplicate key exception) as the result 
of the offending SQL++ _INSERT_ statement, and the application logic above will 
need to take the ap
 propriate action(s) needed to assess the resulting state and to clean up 
and/or continue as appropriate.
+AsterixDB supports object-level ACID transactions that begin and terminate 
implicitly for each object inserted, deleted, or searched while a given SQL++ 
statement is being executed. This is quite similar to the level of transaction 
support found in today's NoSQL stores. AsterixDB does not support 
multi-statement transactions, and in fact an SQL++ statement that involves 
multiple objects can itself involve multiple independent object-level 
transactions. An example consequence of this is that, when an SQL++ statement 
attempts to insert 1000 objects, it is possible that the first 800 objects 
could end up being committed while the remaining 200 objects fail to be 
inserted. This situation could happen, for example, if a duplicate key 
exception occurs as the 801st insertion is attempted. If this happens, 
AsterixDB will report the error (e.g., a duplicate key exception) as the result 
of the offending SQL++ _INSERT_ statement, and the application logic above will 
need to take the ap
 propriate action(s) needed to assess the resulting state and to clean up 
and/or continue as appropriate.
 
 ## Further Help ##
 That's it! You are now armed and dangerous with respect to semistructured data 
management using AsterixDB via SQL++.
diff --git a/asterixdb/asterix-doc/src/site/markdown/udf.md 
b/asterixdb/asterix-doc/src/site/markdown/udf.md
index 0e1db87..b2ef2bc 100644
--- a/asterixdb/asterix-doc/src/site/markdown/udf.md
+++ b/asterixdb/asterix-doc/src/site/markdown/udf.md
@@ -78,12 +78,12 @@
 In the following we assume that you already created the `TwitterFeed` and its 
corresponding data types and dataset following the instruction explained in the 
[feeds tutorial](feeds/tutorial.html).
 
 A feed definition may optionally include the specification of a
-user-defined function that is to be applied to each feed record prior
+user-defined function that is to be applied to each feed object prior
 to persistence. Examples of pre-processing might include adding
-attributes, filtering out records, sampling, sentiment analysis, feature
+attributes, filtering out objects, sampling, sentiment analysis, feature
 extraction, etc. We can express a UDF, which can be defined in AQL or in a 
programming
 language such as Java, to perform such pre-processing. An AQL UDF is a good 
fit when
-pre-processing a record requires the result of a query (join or aggregate)
+pre-processing an object requires the result of a query (join or aggregate)
 over data contained in AsterixDB datasets. More sophisticated
 processing such as sentiment analysis of text is better handled
 by providing a Java UDF. A Java UDF has an initialization phase
@@ -145,9 +145,9 @@
 introduce the notion of primary and secondary feeds in AsterixDB.
 
 A feed in AsterixDB is considered to be a primary feed if it gets
-its data from an external data source. The records contained in a
+its data from an external data source. The objects contained in a
 feed (subsequent to any pre-processing) are directed to a designated
-AsterixDB dataset. Alternatively or additionally, these records can
+AsterixDB dataset. Alternatively or additionally, these objects can
 be used to derive other feeds known as secondary feeds. A secondary
 feed is similar to its parent feed in every other aspect; it can
 have an associated UDF to allow for any subsequent processing,
@@ -167,7 +167,7 @@
 
         connect feed ProcessedTwitterFeed to dataset ProcessedTweets;
 
-The `addHashTags` function is already provided in the example UDF.To see what 
records
+The `addHashTags` function is already provided in the example UDF. To see what 
objects
 are being inserted into the dataset, we can perform a simple dataset scan after
 allowing a few moments for the feed to start ingesting data:
 

-- 
To view, visit https://asterix-gerrit.ics.uci.edu/1295
To unsubscribe, visit https://asterix-gerrit.ics.uci.edu/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: Idcb2be81d1bfa37dd876cd36a7a5bb824bc3ab86
Gerrit-PatchSet: 1
Gerrit-Project: asterixdb
Gerrit-Branch: master
Gerrit-Owner: Yingyi Bu <buyin...@gmail.com>
