http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/query_select/the_where_clause.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/query_select/the_where_clause.html.md.erb 
b/geode-docs/developing/query_select/the_where_clause.html.md.erb
deleted file mode 100644
index fd2405e..0000000
--- a/geode-docs/developing/query_select/the_where_clause.html.md.erb
+++ /dev/null
@@ -1,336 +0,0 @@
----
-title:  WHERE Clause
----
-
-<a id="the_where_clause__section_56BB3A7F44124CA9BFBC20E19399C6E4"></a>
-Each FROM clause expression must resolve to a collection of objects. The 
collection is then available for iteration in the query expressions that follow 
in the WHERE clause.
-
-For example:
-
-``` pre
-SELECT DISTINCT * FROM /exampleRegion p WHERE p.status = 'active'
-```
-
-The entry value collection is iterated by the WHERE clause, comparing the 
status field to the string 'active'. When a match is found, the value object of 
the entry is added to the return set.
-
-In the next example query, the collection specified in the first FROM clause 
expression is used by the rest of the SELECT statement, including the second 
FROM clause expression.
-
-``` pre
-SELECT DISTINCT * FROM /exampleRegion, positions.values p WHERE p.qty > 1000.00
-```
-
-## <a id="the_where_clause__section_99CA3FA508B740DCBAB4F01F8F9B1390" 
class="no-quick-link"></a>Implementing equals and hashCode Methods
-
-You must implement the `equals` and `hashCode` methods in your custom objects 
if you are doing ORDER BY and DISTINCT queries on the objects. The methods must 
conform to the properties and behavior documented in the online Java API 
documentation for `java.lang.Object`. Inconsistent query results may occur if 
these methods are absent.
-
-If you have implemented `equals` and `hashCode` methods in your custom 
objects, you must provide detailed implementations of these methods so that 
queries execute properly against the objects. For example, assume that you have 
defined a custom object (CustomObject) with the following variables:
-
-``` pre
-int ID
-int otherValue
-```
-
-Let's put two CustomObjects (we'll call them CustomObjectA and CustomObjectB) 
into the cache:
-
-CustomObjectA:
-
-``` pre
-ID=1
-otherValue=1
-```
-
-CustomObjectB:
-
-``` pre
-ID=1
-otherValue=2
-```
-
-If you have implemented the equals method to simply match on the ID field (ID 
== ID), queries will produce unpredictable results.
-
-The following query:
-
-``` pre
-SELECT * FROM /CustomObjects c 
-WHERE c.ID > 1 AND c.ID < 3 
-AND c.otherValue > 0 AND c.otherValue < 3
-```
-
returns two objects; however, the two returned objects will both be either CustomObjectA or CustomObjectB, rather than one of each.
-
-Alternately, the following query:
-
-``` pre
-SELECT * FROM /CustomObjects c 
-WHERE c.ID > 1 AND c.ID < 3 
-AND c.otherValue > 1 AND c.otherValue < 3
-```
-
-returns either 0 results or 2 results of CustomObjectB, depending on which 
entry is evaluated last.
-
-To avoid unpredictable querying behavior, implement detailed versions of the 
`equals` and `hashCode` methods.
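-
-The following is a minimal sketch of such detailed implementations for the CustomObject described above. It is illustrative only; it assumes the two int fields shown earlier are the object's complete identity.
-
-``` pre
-public class CustomObject implements java.io.Serializable {
-  int ID;
-  int otherValue;
-
-  @Override
-  public boolean equals(Object obj) {
-    if (this == obj) return true;
-    if (!(obj instanceof CustomObject)) return false;
-    CustomObject other = (CustomObject) obj;
-    // Compare every identifying field, not just ID.
-    return this.ID == other.ID && this.otherValue == other.otherValue;
-  }
-
-  @Override
-  public int hashCode() {
-    // Combine the same fields used by equals.
-    return 31 * ID + otherValue;
-  }
-}
-```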
-
-If you are comparing a non-primitive field of the object in the WHERE clause, use the `equals` method instead of the `=` operator. For example, instead of `nonPrimitiveObj = objToBeCompared`, use `nonPrimitiveObj.equals(objToBeCompared)`.
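-
-For illustration, the following sketch compares a hypothetical non-primitive field using `equals` rather than `=` (the field names `position1` and `position2` are assumptions, not fields defined above):
-
-``` pre
-SELECT * FROM /exampleRegion p WHERE p.position1.equals(p.position2)
-```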
-
-## <a id="the_where_clause__section_7484AD999D01473385628246697F37F6" 
class="no-quick-link"></a>Querying Serialized Objects
-
-Objects must be serializable if you will be querying partitioned regions or if you are performing client-server querying.
-
-If you are using PDX serialization, you can access the values of individual fields without having to deserialize the entire object. This is accomplished by using PdxInstance, which is a wrapper around the serialized stream. The PdxInstance provides a helper method that takes a field name and returns the value without deserializing the object. While evaluating the query, the query engine accesses field values by calling the `getField` method, thus avoiding deserialization.
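-
-The following is a minimal sketch of reading a field from query results without deserialization. It assumes read-serialized is enabled, a Cache reference named `cache`, and that /exampleRegion holds PDX-serialized values with a String `status` field; it is not taken from the sample applications.
-
-``` pre
- // Execute a query; with read-serialized enabled, results may be PdxInstances.
- SelectResults results = (SelectResults) cache.getQueryService()
-     .newQuery("SELECT * FROM /exampleRegion WHERE status = 'active'")
-     .execute();
-
- for (Object obj : results) {
-   if (obj instanceof PdxInstance) {
-     PdxInstance pdx = (PdxInstance) obj;
-     // Read a single field without deserializing the whole object.
-     String status = (String) pdx.getField("status");
-   }
- }
-```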
-
-To use PdxInstances in querying, ensure that PDX serialization reads are 
enabled in your server's cache. In gfsh, execute the following command before 
starting up your data members:
-
-``` pre
-gfsh>configure pdx --read-serialized=true
-```
-
-See [configure 
pdx](../../tools_modules/gfsh/command-pages/configure.html#topic_jdkdiqbgphqh) 
for more information.
-
-In cache.xml, set the following:
-
-``` pre
-<!-- Cache configuration setting PDX read behavior -->
-<cache>
-  <pdx read-serialized="true">
-  ...
-  </pdx>
-</cache>
-```
-
-## <a id="the_where_clause__section_75A114F9FEBF40A586621CAA1780DBD3" 
class="no-quick-link"></a>Attribute Visibility
-
-You can access any object or object attribute that is available in the current 
scope of a query. In querying, an object's attribute is any identifier that can 
be mapped to a public field or method in the object. In the FROM specification, 
any object that is in scope is valid. Therefore, at the beginning of a query, 
all locally cached regions and their attributes are in scope.
-
-For the attribute Position.secId, which is public and has the getter method `getSecId()`, the query can be written in any of the following ways:
-
-``` pre
-SELECT DISTINCT * FROM /exampleRegion p WHERE p.position1.secId = '1'
-SELECT DISTINCT * FROM /exampleRegion p WHERE p.position1.SecId = '1'
-SELECT DISTINCT * FROM /exampleRegion p WHERE p.position1.getSecId() = '1'
-```
-
-The query engine first tries to evaluate the attribute using the public field value. If a public field is not found, it invokes the getter method derived from the field name (note that the first character of the field name is uppercased in the method name).
-
-## <a id="the_where_clause__section_EB7B976238104C0EACD959C52E5BD75B" 
class="no-quick-link"></a>Joins
-
-If collections in the FROM clause are not related to each other, the WHERE 
clause can be used to join them.
-
-The statement below returns all portfolios from the /exampleRegion and 
/exampleRegion2 regions that have the same status.
-
-``` pre
-SELECT * FROM /exampleRegion portfolio1, /exampleRegion2 portfolio2 WHERE 
portfolio1.status = portfolio2.status
-```
-
-To create indexes for region joins, create single-region indexes for both sides of the join condition. These are used during query execution for the join condition. Partitioned regions do not support region joins. For more information on indexes, see [Working with Indexes](../query_index/query_index.html).
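-
-For example, for the join on `status` shown above, the two single-region indexes could be created from gfsh as follows (the index names are illustrative):
-
-``` pre
-gfsh>create index --name=statusIndex1 --expression=status --region=/exampleRegion
-gfsh>create index --name=statusIndex2 --expression=status --region=/exampleRegion2
-```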
-
-**Examples:**
-
-Query two regions. Return the ID and status for portfolios that have the same 
status.
-
-``` pre
-SELECT portfolio1.ID, portfolio2.status FROM /exampleRegion portfolio1, 
/exampleRegion2 portfolio2 WHERE portfolio1.status = portfolio2.status
-```
-
-Query two regions, iterating over all `positions` within each portfolio. Return all 4-tuples consisting of the value from each of the two regions and the value portion of the `positions` map from both regions in which the `secId` fields of the positions match.
-
-``` pre
-SELECT * FROM /exampleRegion portfolio1, portfolio1.positions.values 
positions1, /exampleRegion2 portfolio2, portfolio2.positions.values positions2 
WHERE positions1.secId = positions2.secId
-```
-
-Same query as the previous example, with the additional constraint that `portfolio1` must have an `ID` of 1 for a match.
-
-``` pre
-SELECT * FROM /exampleRegion portfolio1, portfolio1.positions.values 
positions1, /exampleRegion2 portfolio2, portfolio2.positions.values positions2 
WHERE portfolio1.ID = 1 AND positions1.secId = positions2.secId
-```
-
-## <a id="the_where_clause__section_D91E0B06FFF6431490CC0BFA369425AD" 
class="no-quick-link"></a>LIKE
-
-Geode offers limited support for the LIKE predicate. LIKE can be used to mean 'equal to'. If you terminate the string with a wildcard ('%'), it behaves like 'starts with'. You can also place a wildcard (either '%' or '\_') at any other position in the comparison string. You can escape the wildcard characters to represent the characters themselves.
-
-**Note:**
-The '\*' wildcard is not supported in OQL LIKE predicates.
-
-You can also use the LIKE predicate when an index is present.
-
-**Examples:**
-
-Query the region. Return all objects where status equals 'active':
-
-``` pre
-SELECT * FROM /exampleRegion p WHERE p.status LIKE 'active'
-```
-
-Query the region using a wildcard for comparison. Returns all objects where status begins with 'activ':
-
-``` pre
-SELECT * FROM /exampleRegion p WHERE p.status LIKE 'activ%'
-```
-
-## Case Insensitive Fields
-
-You can use the Java String class methods `toUpperCase` and `toLowerCase` to 
transform fields where you want to perform a case-insensitive search. For 
example:
-
-``` pre
-SELECT entry.value FROM /exampleRegion.entries entry WHERE 
entry.value.toUpperCase LIKE '%BAR%'
-```
-
-or
-
-``` pre
-SELECT * FROM /exampleRegion WHERE foo.toLowerCase LIKE '%bar%'
-```
-
-## <a id="the_where_clause__section_D2F8D17B52B04895B672E2FCD675A676" 
class="no-quick-link"></a>Method Invocations
-
-To use a method in a query, use the attribute name that maps to the public 
method you want to invoke.
-
-``` pre
-SELECT DISTINCT * FROM /exampleRegion p WHERE p.positions.size >= 2
-```
-
-Here, `p.positions.size` maps to the `positions.size()` method.
-
-Methods declared to return void evaluate to null when invoked through the 
query processor.
-
-You cannot invoke a static method. See [Enum 
Objects](the_where_clause.html#the_where_clause__section_59E7D64746AE495D942F2F09EF7DB9B5)
 for more information.
-
-**Methods without parameters**
-
-If the attribute name maps to a public method that takes no parameters, just 
include the method name in the query string as an attribute. For example, 
emps.isEmpty is equivalent to emps.isEmpty().
-
-In the following example, the query invokes isEmpty on positions, and returns 
the set of all portfolios with no positions:
-
-``` pre
-SELECT DISTINCT * FROM /exampleRegion p WHERE p.positions.isEmpty
-```
-
-**Methods with parameters**
-
-To invoke methods with parameters, include the method name in the query string 
as an attribute and provide method arguments between parentheses.
-
-This example passes the argument "Bo" to the public method, and returns all 
names that begin with "Bo".
-
-``` pre
-SELECT DISTINCT * FROM /exampleRegion p WHERE p.name.startsWith('Bo')
-```
-
-For overloaded methods, the query processor decides which method to call by 
matching the runtime argument types with the parameter types required by the 
method. If only one method's signature matches the parameters provided, it is 
invoked. The query processor uses runtime types to match method signatures.
-
-If more than one method can be invoked, the query processor chooses the method 
whose parameter types are the most specific for the given arguments. For 
example, if an overloaded method includes versions with the same number of 
arguments, but one takes a Person type as an argument and the other takes an 
Employee type, derived from Person, Employee is the more specific object type. 
If the argument passed to the method is compatible with both types, the query 
processor uses the method with the Employee parameter type.
-
-The query processor uses the runtime types of the parameters and the receiver 
to determine the proper method to invoke. Because runtime types are used, an 
argument with a null value has no typing information, and so can be matched 
with any object type parameter. When a null argument is used, if the query 
processor cannot determine the proper method to invoke based on the non-null 
arguments, it throws an `AmbiguousNameException`.
-
-## <a id="the_where_clause__section_59E7D64746AE495D942F2F09EF7DB9B5" 
class="no-quick-link"></a>Enum Objects
-
-To write a query based on the value of an Enum object field, you must use the 
`toString` method of the enum object or use a query bind parameter.
-
-For example, the following query is NOT valid:
-
-``` pre
-//INVALID QUERY
-select distinct * from /QueryRegion0 where aDay = Day.Wednesday
-```
-
-The reason it is invalid is that the reference to `Day.Wednesday` requires a static class access and method invocation, which is not supported.
-
-Enum types can be queried by using the `toString` method of the enum object or by using a bind parameter. When you query using the `toString` method, you must already know the constraint value that you wish to query. In the first example below, the known value is 'active'.
-
-**Examples:**
-
-Query enum type using the toString method:
-
-``` pre
-// eStatus is an enum with values 'active' and 'inactive'
-select * from /exampleRegion p where p.eStatus.toString() = 'active'
-```
-
-Query the enum type using a bind parameter. The value of the desired Enum field (Day.Wednesday) is passed as an execution parameter:
-
-``` pre
-select distinct * from /QueryRegion0 where aDay = $1
-```
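-
-A minimal sketch of supplying the bind parameter from application code, assuming a QueryService instance named `queryService` and the `Day` enum used above:
-
-``` pre
- Query query = queryService.newQuery(
-     "select distinct * from /QueryRegion0 where aDay = $1");
- // Pass the enum constant as the value for $1.
- Object[] params = new Object[] { Day.Wednesday };
- SelectResults results = (SelectResults) query.execute(params);
-```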
-
-## <a id="the_where_clause__section_AC12146509F141378E493078540950C7" 
class="no-quick-link"></a>IN and SET
-
-The IN expression evaluates to a boolean indicating whether one expression is present inside a collection of expressions of compatible type. The determination is based on the expressions' equals semantics.
-
-If `e1` and `e2` are expressions, `e2` is a collection, and `e1` is an object or a literal whose type is a subtype of or the same type as the elements of `e2`, then `e1 IN e2` is an expression of type boolean.
-
-The expression returns:
-
--   TRUE if e1 is not UNDEFINED and is contained in collection e2
-   FALSE if e1 is not UNDEFINED and is not contained in collection e2
--   UNDEFINED if e1 is UNDEFINED
-
-For example, `2 IN SET(1, 2, 3)` is TRUE.
-
-Another example is when the collection you are querying into is defined by a 
subquery. This query looks for companies that have an active portfolio on file:
-
-``` pre
-SELECT name, address FROM /company 
-  WHERE id IN (SELECT id FROM /portfolios WHERE status = 'active')
-```
-
-The interior SELECT statement returns a collection of ids for all /portfolios 
entries whose status is active. The exterior SELECT iterates over /company, 
comparing each entry’s id with this collection. For each entry, if the IN 
expression returns TRUE, the associated name and address are added to the outer 
SELECT’s collection.
-
-**Comparing Set Values**
-
-The following is an example of a set value type comparison where sp is of type 
Set:
-
-``` pre
-SELECT * FROM /exampleRegion WHERE sp = set('20','21','22')
-```
-
-In this case, if sp contains only '20' and '21', the query will evaluate to false. The query compares the two sets and looks for the presence of all elements in both sets.
-
-For other collection types such as List, the query can be written as follows:
-
-``` pre
-SELECT * FROM /exampleRegion WHERE sp.containsAll(set('20','21','22'))
-```
-
-where sp is of type List.
-
-In order to use IN with a Set value, the query can be written as:
-
-``` pre
-SELECT * FROM /exampleRegion WHERE sp IN SET (set('20','21','22'),set('10','11','12'))
-```
-
-where a set value is searched for in a collection of set values.
-
-One problem is that you cannot create indexes on Set or List types (collection types) that are not comparable. To work around this, you can create an index on a custom collection type that implements Comparable.
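-
-A minimal sketch of such a custom collection type, under the assumption that ordering by size and then by sorted contents is acceptable for your data (the class name is illustrative):
-
-``` pre
-public class ComparableSet extends java.util.HashSet<String>
-    implements Comparable<ComparableSet> {
-
-  public int compareTo(ComparableSet other) {
-    // Order by size first, then by the sorted string form of the contents.
-    int sizeDiff = this.size() - other.size();
-    if (sizeDiff != 0) {
-      return sizeDiff;
-    }
-    return new java.util.TreeSet<String>(this).toString()
-        .compareTo(new java.util.TreeSet<String>(other).toString());
-  }
-}
-```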
-
-## <a id="the_where_clause__section_E7206D045BEC4F67A8D2B793922BF213" 
class="no-quick-link"></a>Double.NaN and Float.NaN Comparisons
-
-The comparison behavior of Double.NaN and Float.NaN within Geode queries follows the semantics of the JDK methods Float.compareTo and Double.compareTo.
-
-In summary, the comparisons differ in the following ways from those performed by the Java language numerical comparison operators (<, <=, ==, >=, >) when applied to primitive double [float] values:
-
--   Double.NaN \[Float.NaN\] is considered to be equal to itself and greater 
than all other double \[float\] values (including Double.POSITIVE\_INFINITY 
\[Float.POSITIVE\_INFINITY\]).
--   0.0d \[0.0f\] is considered by this method to be greater than -0.0d 
\[-0.0f\].
-
-Therefore, Double.NaN \[Float.NaN\] is considered to be larger than Double.POSITIVE\_INFINITY \[Float.POSITIVE\_INFINITY\]. Here are some example queries and what to expect.
-
-| If p.value is NaN, the following query:                          | Evaluates to:     | Appears in the result set?     |
-|------------------------------------------------------------------|-------------------|--------------------------------|
-| `SELECT * FROM /positions p WHERE p.value = 0`                   | false             | no                             |
-| `SELECT * FROM /positions p WHERE p.value > 0`                   | true              | yes                            |
-| `SELECT * FROM /positions p WHERE p.value >= 0`                  | true              | yes                            |
-| `SELECT * FROM /positions p WHERE p.value < 0`                   | false             | no                             |
-| `SELECT * FROM /positions p WHERE p.value <= 0`                  | false             | no                             |
-| **When p.value and p.value1 are both NaN, the following query:** | **Evaluates to:** | **Appears in the result set:** |
-| `SELECT * FROM /positions p WHERE p.value = p.value1`            | true              | yes                            |
-
-If you concatenate the NaN value into the query string in your code, as in the following example, the value is considered UNDEFINED when the query is parsed and will not be returned in the result set.
-
-``` pre
-String query = "SELECT * FROM /positions p WHERE p.value =" + Float.NaN
-```
-
-To retrieve NaN values without relying on another field that is already stored as NaN, you can define the following query in your code:
-
-``` pre
-String query = "SELECT * FROM /positions p WHERE p.value > " + Float.MAX_VALUE;
-        
-```

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/querying_basics/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/querying_basics/chapter_overview.html.md.erb 
b/geode-docs/developing/querying_basics/chapter_overview.html.md.erb
deleted file mode 100644
index 27611d0..0000000
--- a/geode-docs/developing/querying_basics/chapter_overview.html.md.erb
+++ /dev/null
@@ -1,21 +0,0 @@
----
-title:  Querying
----
-
-Geode provides a SQL-like querying language called OQL that allows you to 
access data stored in Geode regions.
-
-Since Geode regions are key-value stores where values can range from simple 
byte arrays to complex nested objects, Geode uses a query syntax based on OQL 
(Object Query Language) to query region data. OQL is very similar to SQL, but 
OQL allows you to query complex objects, object attributes, and methods.
-
--   **[Geode Querying FAQ and 
Examples](../../getting_started/querying_quick_reference.html)**
-
-    This topic answers some frequently asked questions on querying 
functionality. It provides examples to help you get started with Geode querying.
-
--   **[Basic Querying](../../developing/querying_basics/query_basics.html)**
-
-    This section provides a high-level introduction to Geode querying, such as building a query string, and describes query language features.
-
--   **[Advanced 
Querying](../../developing/query_additional/advanced_querying.html)**
-
-    This section includes advanced querying topics such as using query 
indexes, using query bind parameters, querying partitioned regions and query 
debugging.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/querying_basics/comments_in_query_strings.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/querying_basics/comments_in_query_strings.html.md.erb 
b/geode-docs/developing/querying_basics/comments_in_query_strings.html.md.erb
deleted file mode 100644
index 5125609..0000000
--- 
a/geode-docs/developing/querying_basics/comments_in_query_strings.html.md.erb
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title:  Comments in Query Strings
----
-
-Comment lines begin with `--` (double dash). Comment blocks begin with `/*` and end with `*/`. For example:
-
-``` pre
-SELECT * --my comment 
-FROM /exampleRegion /* here is
-a comment */ WHERE status = 'active'
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/querying_basics/monitor_queries_for_low_memory.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/querying_basics/monitor_queries_for_low_memory.html.md.erb
 
b/geode-docs/developing/querying_basics/monitor_queries_for_low_memory.html.md.erb
deleted file mode 100644
index 3064b70..0000000
--- 
a/geode-docs/developing/querying_basics/monitor_queries_for_low_memory.html.md.erb
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: Monitoring Queries for Low Memory
----
-
-<a id="topic_685CED6DE7D0449DB8816E8ABC1A6E6F"></a>
-
-
-The query monitoring feature prevents out-of-memory exceptions from occurring 
when you execute queries or create indexes.
-
-This feature is automatically enabled when you set a `critical-heap-percentage` attribute for the resource-manager element in cache.xml or by using the `cache.getResourceManager().setCriticalHeapPercentage(float heapPercentage)` API. Use this feature to cancel queries that run too long and to warn users of low memory conditions while they are running queries or creating indexes.
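-
-A minimal cache.xml sketch of enabling the monitor by setting the critical heap percentage (the threshold value of 85 is illustrative only):
-
-``` pre
-<cache>
-  ...
-  <!-- Query monitoring is enabled once a critical heap percentage is set -->
-  <resource-manager critical-heap-percentage="85"/>
-</cache>
-```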
-
-You can disable this feature by setting the system property `gemfire.cache.DISABLE_QUERY_MONITOR_FOR_LOW_MEMORY` to true.
-
-When the query memory monitoring feature is on, the default query timeout is set to five hours. You can override this value by setting the query timeout system property `gemfire.cache.MAX_QUERY_EXECUTION_TIME` to a larger or smaller value (any value other than -1).
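-
-As a sketch, these system properties might be passed to a server at startup through the gfsh `--J` option (the 600000 value is illustrative, and assumes the property is interpreted in milliseconds):
-
-``` pre
-gfsh>start server --name=server1 --J=-Dgemfire.cache.MAX_QUERY_EXECUTION_TIME=600000
-```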
-
-When system memory is low (as determined by the critical heap percentage threshold that you defined in cache.xml or through the ResourceManager API), queries throw a `QueryExecutionLowMemoryException`. Any index that is in the process of being created fails with an `InvalidIndexException` whose message indicates the reason.
-
-## <a 
id="topic_685CED6DE7D0449DB8816E8ABC1A6E6F__section_2E9DEEC9D9C94D038543DDE03BC60B20"
 class="no-quick-link"></a>Partitioned Region Queries and Low Memory
-
-Partitioned region queries are likely causes of out-of-memory exceptions. If query monitoring is enabled, partitioned region queries drop or ignore results that are being gathered by other servers if the executing server is low on memory.
-
-Query monitoring does not address a scenario in which a low-level collection is expanded while the partitioned region query is gathering results. For example, if a row is added and causes a Java-level collection or array to expand, it is possible to encounter an out-of-memory exception. This scenario is rare and is only possible if the collection size itself expands before a low memory condition is met and then expands beyond the remaining available memory. If you encounter this situation, you may be able to work around it by lowering the `critical-heap-percentage` further.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/querying_basics/oql_compared_to_sql.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/querying_basics/oql_compared_to_sql.html.md.erb 
b/geode-docs/developing/querying_basics/oql_compared_to_sql.html.md.erb
deleted file mode 100644
index df13209..0000000
--- a/geode-docs/developing/querying_basics/oql_compared_to_sql.html.md.erb
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title:  Advantages of OQL
----
-
-The following list describes some of the advantages of using an OQL-based 
querying language:
-
--   You can query on any arbitrary object
--   You can navigate object collections
--   You can invoke methods and access the behavior of objects
--   Data mapping is supported
--   You are not required to declare types. Since you do not need type 
definitions, you can work across multiple languages
--   You are not constrained by a schema
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/querying_basics/performance_considerations.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/querying_basics/performance_considerations.html.md.erb 
b/geode-docs/developing/querying_basics/performance_considerations.html.md.erb
deleted file mode 100644
index b37e529..0000000
--- 
a/geode-docs/developing/querying_basics/performance_considerations.html.md.erb
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title:  Performance Considerations
----
-
-This topic covers considerations for improving query performance.
-
-<a 
id="performance_considerations__section_2DA52BD8C72A4D01982CA8A44954ADAF"></a>
-Some general performance tips:
-
--   Improve query performance whenever possible by creating indexes. See [Tips 
and Guidelines on Using 
Indexes](../query_index/indexing_guidelines.html#indexing_guidelines) for some 
scenarios for using indexes.
--   Use bind parameters for frequently used queries. When you use a bind 
parameter, the query is compiled once. This improves the subsequent performance 
of the query when it is re-run. See [Using Query Bind 
Parameters](../query_additional/using_query_bind_parameters.html#concept_173E775FE46B47DF9D7D1E40680D34DF)
 for more details.
-   When querying partitioned regions, execute the query using the FunctionService. This approach allows you to target a particular node, which greatly improves performance by avoiding query distribution. See [Querying a Partitioned Region on a Single Node](../query_additional/query_on_a_single_node.html#concept_30B18A6507534993BD55C2C9E0544A97) for more information.
--   Use key indexes when querying data that has been partitioned by a key or 
field value. See [Optimizing Queries on Data Partitioned by a Key or Field 
Value](../query_additional/partitioned_region_key_or_field_value.html#concept_3010014DFBC9479783B2B45982014454).
--   The size of a query result set depends on the restrictiveness of the query 
and the size of the total data set. A partitioned region can hold much more 
data than other types of regions, so there is more potential for larger result 
sets on partitioned region queries. This could cause the member receiving the 
results to run out of memory if the result set is very large.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/querying_basics/query_basics.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/querying_basics/query_basics.html.md.erb 
b/geode-docs/developing/querying_basics/query_basics.html.md.erb
deleted file mode 100644
index 89324f7..0000000
--- a/geode-docs/developing/querying_basics/query_basics.html.md.erb
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title:  Basic Querying
----
-
-This section provides a high-level introduction to Geode querying, such as building a query string, and describes query language features.
-
-<a id="querying_with_oql__section_828A9660B5014DCAA883A58A45E6B51A"></a>
-Geode provides a SQL-like querying language that allows you to access data stored in Geode regions. Since Geode regions are key-value stores where values can range from simple byte arrays to complex nested objects, Geode uses a query syntax based on OQL (Object Query Language) to query region data. OQL and SQL have many syntactical similarities; however, they have significant differences. For example, while OQL does not offer all of the capabilities of SQL, such as aggregates, OQL does allow you to execute queries on complex object graphs, query object attributes, and invoke object methods.
-
-The syntax of a typical Geode OQL query is:
-
-``` pre
-[IMPORT package]
-SELECT [DISTINCT] projectionList
-FROM collection1, [collection2, …]
-[WHERE clause]
-[ORDER BY order_criteria [desc]]
-```
-
-Therefore, a simple Geode OQL query resembles the following:
-
-``` pre
-SELECT DISTINCT * FROM /exampleRegion WHERE status = 'active'
-```
-
-An important characteristic of Geode querying is that, by default, Geode queries the values of a region and not the keys. To obtain keys from a region, you must use the keySet path expression on the queried region. For example, `/exampleRegion.keySet`.
-
-For those new to Geode querying, see also the [Geode Querying FAQ and Examples](../../getting_started/querying_quick_reference.html#reference_D5CE64F5FD6F4A808AEFB748C867189E).
-
--   **[Advantages of 
OQL](../../developing/querying_basics/oql_compared_to_sql.html)**
-
--   **[Writing and Executing a Query in 
Geode](../../developing/querying_basics/running_a_query.html)**
-
--   **[Building a Query 
String](../../developing/querying_basics/what_is_a_query_string.html)**
-
--   **[OQL Syntax and 
Semantics](../../developing/query_additional/query_language_features.html)**
-
--   **[Query Language Restrictions and Unsupported 
Features](../../developing/querying_basics/restrictions_and_unsupported_features.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/querying_basics/query_grammar_and_reserved_words.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/querying_basics/query_grammar_and_reserved_words.html.md.erb
 
b/geode-docs/developing/querying_basics/query_grammar_and_reserved_words.html.md.erb
deleted file mode 100644
index bf5b564..0000000
--- 
a/geode-docs/developing/querying_basics/query_grammar_and_reserved_words.html.md.erb
+++ /dev/null
@@ -1,146 +0,0 @@
----
-title:  Query Language Grammar
----
-
-## <a 
id="query_grammar_and_reserved_words__section_F6DF7EBA0201463F9F19645849748D54" 
class="no-quick-link"></a>Language Grammar
-
-Notation used in the grammar:
-n   
-A nonterminal symbol that must appear somewhere within the grammar on the left side of a rule. All nonterminal symbols must eventually be derived into terminal symbols.
-
- ***t***   
-A terminal symbol (shown in italic bold).
-
-x y   
-x followed by y
-
-x | y   
-x or y
-
-(x | y)   
-x or y
-
-\[ x \]   
-x or empty
-
-{ x }   
-A possibly empty sequence of x.
-
- *comment*   
-descriptive text
-
-Grammar list:
-
-``` pre
-symbol ::= expression
-query_program ::= [ imports semicolon ] query [semicolon]
-imports ::= import { semicolon import }
-import ::= IMPORT qualifiedName [ AS identifier ]
-query ::= selectExpr | expr
-selectExpr ::= SELECT DISTINCT projectionAttributes fromClause [ whereClause ]
-projectionAttributes ::= * | projectionList
-projectionList ::= projection { comma projection }
-projection ::= field | expr [ AS identifier ]
-field ::= identifier colon expr
-fromClause ::= FROM iteratorDef { comma iteratorDef }
-iteratorDef ::= expr [ [ AS ] identifier ] [ TYPE identifier ] | identifier IN 
expr [ TYPE identifier ]
-whereClause ::= WHERE expr
-expr ::= castExpr
-castExpr ::= orExpr | left_paren identifier right_paren castExpr
-orExpr ::= andExpr { OR andExpr }
-andExpr ::= equalityExpr { AND equalityExpr }
-equalityExpr ::= relationalExpr { ( = | <> | != ) relationalExpr }
-relationalExpr ::= inExpr { ( < | <= | > | >= ) inExpr }
-inExpr ::= unaryExpr { IN unaryExpr }
-unaryExpr ::= [ NOT ] unaryExpr
-postfixExpr ::= primaryExpr { left_bracket expr right_bracket }
-        | primaryExpr { dot identifier [ argList ] }
-argList ::= left_paren [ valueList ] right_paren
-qualifiedName ::= identifier { dot identifier }
-primaryExpr ::= functionExpr
-        | identifier [ argList ]
-        | undefinedExpr
-        | collectionConstruction
-        | queryParam
-        | literal
-        | ( query )
-        | region_path
-functionExpr ::= ELEMENT left_paren query right_paren
-        | NVL left_paren query comma query right_paren
-        | TO_DATE left_paren query right_paren
-undefinedExpr ::= IS_UNDEFINED left_paren query right_paren
-        | IS_DEFINED left_paren query right_paren
-collectionConstruction ::= SET left_paren [ valueList ] right_paren
-valueList ::= expr { comma expr }
-queryParam ::= $ integerLiteral
-region_path ::= forward_slash region_name { forward_slash region_name }
-region_name ::= name_character { name_character }
-identifier ::= letter { name_character }
-literal ::= booleanLiteral
-        | integerLiteral
-        | longLiteral
-        | doubleLiteral
-        | floatLiteral
-        | charLiteral
-        | stringLiteral
-        | dateLiteral
-        | timeLiteral
-        | timestampLiteral
-        | NULL
-        | UNDEFINED
-booleanLiteral ::= TRUE | FALSE
-integerLiteral ::= [ dash ] digit { digit }
-longLiteral ::= integerLiteral L
-floatLiteral ::= [ dash ] digit { digit } dot digit { digit } [ ( E | e ) [ 
plus | dash ] digit { digit } ] F
-doubleLiteral ::= [ dash ] digit { digit } dot digit { digit } [ ( E | e ) [ 
plus | dash ] digit { digit } ] [ D ]
-charLiteral ::= CHAR single_quote character single_quote
-stringLiteral ::= single_quote { character } single_quote
-dateLiteral ::= DATE single_quote integerLiteral dash integerLiteral dash 
integerLiteral single_quote
-timeLiteral ::= TIME single_quote integerLiteral colon
-        integerLiteral colon integerLiteral single_quote
-timestampLiteral ::= TIMESTAMP single_quote
-        integerLiteral dash integerLiteral dash integerLiteral integerLiteral 
colon
-        integerLiteral colon
-        digit { digit } [ dot digit { digit } ] single_quote
-letter ::= any unicode letter
-character ::= any unicode character except 0xFFFF
-name_character ::= letter | digit | underscore
-digit ::= any unicode digit 
-```
-
-The following expressions are all terminal characters:
-
-``` pre
-dot ::= .
-left_paren ::= (
-right_paren ::= )
-left_bracket ::= [
-right_bracket ::= ]
-single_quote ::= '
-underscore ::= _
-forward_slash ::= /
-comma ::= ,
-semicolon ::= ;
-colon ::= :
-dash ::= -
-plus ::= +
-            
-```
-
-## <a 
id="query_grammar_and_reserved_words__section_B074373F2ED44DC7B98652E70ABC5D5D" 
class="no-quick-link"></a>Language Notes
-
--   Query language keywords such as SELECT, NULL, and DATE are 
case-insensitive. Identifiers such as attribute names, method names, and path 
expressions are case-sensitive.
--   Comment lines begin with -- (double dash).
--   Comment blocks begin with /\* and end with \*/.
--   String literals are delimited by single-quotes. Embedded single-quotes are 
doubled.
-
-    Examples:
-
-    ``` pre
-    'Hello' value = Hello
-    'He said, ''Hello''' value = He said, 'Hello'
-    ```
-
--   Character literals begin with the CHAR keyword followed by the character 
in single quotation marks. The single-quotation mark character itself is 
represented as `CHAR ''''` (with four single quotation marks).
--   In the TIMESTAMP literal, there is a maximum of nine digits after the 
decimal point.
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/querying_basics/querying_partitioned_regions.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/querying_basics/querying_partitioned_regions.html.md.erb
 
b/geode-docs/developing/querying_basics/querying_partitioned_regions.html.md.erb
deleted file mode 100644
index 14e7f09..0000000
--- 
a/geode-docs/developing/querying_basics/querying_partitioned_regions.html.md.erb
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title:  Querying Partitioned Regions
----
-
-Geode allows you to manage and store large amounts of data across distributed 
nodes using partitioned regions. The basic unit of storage for a partitioned 
region is a bucket, which resides on a Geode node and contains all the entries 
that map to a single hashcode. In a typical partitioned region query, the 
system distributes the query to all buckets across all nodes, then merges the 
result sets and sends back the query results.
-
-<a 
id="querying_partitioned_regions__section_4C603563DEDC4303818FB8F894470457"></a>
-The following list summarizes the querying functionality supported by Geode 
for partitioned regions:
-
-   **Ability to target specific nodes in a query**. If you know that a specific bucket contains the data that you want to query, you can use a function to ensure that your query runs only on the specific node that holds the data. This can greatly improve query efficiency. The ability to query data on a specific node is only available if you are using functions and if the function is executed on one single region. To do this, you need to use `Query.execute(RegionFunctionContext context)`. See the [Java API](/releases/latest/javadoc/org/apache/geode/cache/query/Query.html) and [Querying a Partitioned Region on a Single Node](../query_additional/query_on_a_single_node.html#concept_30B18A6507534993BD55C2C9E0544A97) for more details, and see the sketch following this list.
-   **Ability to optimize partitioned region query performance using key indexes**. You can improve query performance on data that is partitioned by key or a field value by creating a key index and then executing the query using `Query.execute(RegionFunctionContext context)` with the key or field value used as the filter. See the [Java API](/releases/latest/javadoc/org/apache/geode/cache/query/Query.html) and [Optimizing Queries on Data Partitioned by a Key or Field Value](../query_additional/partitioned_region_key_or_field_value.html#concept_3010014DFBC9479783B2B45982014454) for more details.
-   **Ability to perform equi-join queries between partitioned regions and between partitioned regions and replicated regions**. Join queries between partitioned regions, and between partitioned regions and replicated regions, are supported through the function service. In order to perform equi-join operations on partitioned regions, or on partitioned regions and replicated regions, the partitioned regions must be colocated and you need to use `Query.execute(RegionFunctionContext context)`. See the [Java API](/releases/latest/javadoc/org/apache/geode/cache/query/Query.html) and [Performing an Equi-Join Query on Partitioned Regions](../partitioned_regions/join_query_partitioned_regions.html#concept_B930D276F49541F282A2CFE639F107DD) for more details.
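-
-The following is a minimal sketch of the `execute` method of such a function, assuming the function is invoked with `FunctionService.onRegion(region)` against /exampleRegion; the rest of the Function class, its registration, and the invocation code are omitted.
-
-``` pre
-public void execute(FunctionContext context) {
-  RegionFunctionContext rfc = (RegionFunctionContext) context;
-  QueryService queryService = CacheFactory.getAnyInstance().getQueryService();
-  Query query = queryService.newQuery(
-      "SELECT * FROM /exampleRegion WHERE status = 'active'");
-  try {
-    // Passing the RegionFunctionContext restricts the query to the data
-    // hosted locally by this member (and to any filter keys supplied).
-    SelectResults results = (SelectResults) query.execute(rfc);
-    context.getResultSender().lastResult(results.asList());
-  } catch (Exception e) {
-    throw new FunctionException(e);
-  }
-}
-```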
-
--   **[Using ORDER BY on Partitioned 
Regions](../../developing/query_additional/order_by_on_partitioned_regions.html)**
-
--   **[Querying a Partitioned Region on a Single 
Node](../../developing/query_additional/query_on_a_single_node.html)**
-
--   **[Optimizing Queries on Data Partitioned by a Key or Field 
Value](../../developing/query_additional/partitioned_region_key_or_field_value.html)**
-
--   **[Performing an Equi-Join Query on Partitioned 
Regions](../../developing/partitioned_regions/join_query_partitioned_regions.html)**
-
--   **[Partitioned Region Query 
Restrictions](../../developing/query_additional/partitioned_region_query_restrictions.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/querying_basics/reserved_words.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/querying_basics/reserved_words.html.md.erb 
b/geode-docs/developing/querying_basics/reserved_words.html.md.erb
deleted file mode 100644
index 3da8b33..0000000
--- a/geode-docs/developing/querying_basics/reserved_words.html.md.erb
+++ /dev/null
@@ -1,112 +0,0 @@
----
-title:  Reserved Words
----
-
-## <a 
id="concept_4F288B1F9579422FA481FBE2C3ADD007__section_3415163C3EFB46A6BE873E2606C9DE0F"
 class="no-quick-link"></a>Reserved Words
-
-These words are reserved for the query language and may not be used as identifiers. The words followed by an asterisk (`*`) are not currently used by Geode, but are reserved for future implementation.
-
-<table>
-<colgroup>
-<col width="25%" />
-<col width="25%" />
-<col width="25%" />
-<col width="25%" />
-</colgroup>
-<tbody>
-<tr class="odd">
-<td><pre class="pre codeblock"><code>abs*
-all
-and 
-andthen* 
-any* 
-array 
-as 
-asc 
-avg* 
-bag* 
-boolean 
-by 
-byte 
-char 
-collection
-count 
-date 
-declare* 
-define*
-desc </code></pre></td>
-<td><pre class="pre codeblock"><code>dictionary 
-distinct 
-double 
-element 
-enum* 
-except* 
-exists* 
-false 
-first* 
-flatten* 
-float 
-for* 
-from 
-group* 
-having* 
-import 
-in 
-int 
-intersect* 
-interval* </code></pre></td>
-<td><pre class="pre codeblock"><code>is_defined 
-is_undefined 
-last* 
-like
-limit
-list* 
-listtoset* 
-long 
-map 
-max* 
-min* 
-mod* 
-nil 
-not 
-null 
-nvl 
-octet 
-or 
-order </code></pre></td>
-<td><pre class="pre codeblock"><code>orelse* 
-query* 
-select 
-set 
-short 
-some* 
-string 
-struct* 
-sum* 
-time 
-timestamp 
-to_date 
-true 
-type 
-undefine* 
-undefined 
-union* 
-unique* 
-where</code></pre></td>
-</tr>
-</tbody>
-</table>
-
-To access any method, attribute, or named object that has the same name as a 
query language reserved word, enclose the name within double quotation marks.
-
-Examples:
-
-``` pre
-SELECT DISTINCT "type" FROM /portfolios WHERE status = 'active'
-```
-
-``` pre
-SELECT DISTINCT * FROM /region1 WHERE emps."select"() < 100000 
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/querying_basics/restrictions_and_unsupported_features.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/querying_basics/restrictions_and_unsupported_features.html.md.erb
 
b/geode-docs/developing/querying_basics/restrictions_and_unsupported_features.html.md.erb
deleted file mode 100644
index 44a5e73..0000000
--- 
a/geode-docs/developing/querying_basics/restrictions_and_unsupported_features.html.md.erb
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title:  Query Language Restrictions and Unsupported Features
----
-
-At a high level, Geode does not support the following querying features:
-
--   Indexes targeted for joins across more than one region are not supported
--   Static method invocations. For example, the following query is invalid:
-
-    ``` pre
-    SELECT DISTINCT * FROM /QueryRegion0 WHERE aDay = Day.Wednesday
-    ```
-
-   You cannot create an index on fields using Set/List types (Collection types) that are not comparable. The OQL index implementation expects fields to be Comparable. To work around this, you can create a custom Collection type that implements Comparable.
--   ORDER BY is only supported with DISTINCT queries.
-
-In addition, there are some specific limitations on partitioned region 
querying. See [Partitioned Region Query 
Restrictions](../query_additional/partitioned_region_query_restrictions.html#concept_5353476380D44CC1A7F586E5AE1CE7E8).
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/querying_basics/running_a_query.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/querying_basics/running_a_query.html.md.erb 
b/geode-docs/developing/querying_basics/running_a_query.html.md.erb
deleted file mode 100644
index 83b9d1d..0000000
--- a/geode-docs/developing/querying_basics/running_a_query.html.md.erb
+++ /dev/null
@@ -1,70 +0,0 @@
----
-title:  Writing and Executing a Query in Geode
----
-
-<a id="running_a_querying__section_C285160AF91C4486A39444C3A22D6475"></a>
-The Geode QueryService provides methods to create the Query object. You can 
then use the Query object to perform query-related operations.
-
-The QueryService instance you should use depends on whether you are querying the application's local cache or whether you want your application to query the server cache.
-
-## <a id="running_a_querying__section_8B9C3F5BFBA6421A81EEB404DBE512C2" 
class="no-quick-link"></a>Querying a Local Cache
-
-To query the application's local cache or to query other members, use 
`org.apache.geode.cache.Cache.getQueryService`.
-
-**Sample Code**
-
-``` pre
- // Identify your query string.
- String queryString = "SELECT DISTINCT * FROM /exampleRegion";
- 
- // Get QueryService from Cache.
- QueryService queryService = cache.getQueryService();
- 
- // Create the Query Object.
- Query query = queryService.newQuery(queryString);
- 
- // Execute Query locally. Returns results set.
- SelectResults results = (SelectResults)query.execute();
- 
- // Find the Size of the ResultSet.
- int size = results.size();
- 
- // Iterate through your ResultSet.
- Portfolio p = (Portfolio)results.iterator().next(); /* Region containing 
Portfolio object. */
-```
-
-## <a id="running_a_querying__section_BAD35A249F784095857CC6848F91F6A4" 
class="no-quick-link"></a>Querying a Server Cache from a Client
-
-To perform a client to server query, use 
`org.apache.geode.cache.client.Pool.getQueryService`.
-
-**Sample Code**
-
-``` pre
-// Identify your query string.
- String queryString = "SELECT DISTINCT * FROM /exampleRegion";
- 
- // Get QueryService from client pool.
- QueryService queryService = pool.getQueryService();
- 
- // Create the Query Object.
- Query query = queryService.newQuery(queryString);
- 
 // Execute the query on the server. Returns results set.
- SelectResults results = (SelectResults)query.execute();
- 
- // Find the Size of the ResultSet.
- int size = results.size();
- 
- // Iterate through your ResultSet.
- Portfolio p = (Portfolio)results.iterator().next(); /* Region containing 
Portfolio object. */
-```
-
-Refer to the following JavaDocs for specific APIs:
-
--   [Query 
package](/releases/latest/javadoc/org/apache/geode/cache/query/package-summary.html)
--   
[QueryService](/releases/latest/javadoc/org/apache/geode/cache/query/QueryService.html)
-
-**Note:**
-You can also perform queries using the gfsh `query` command. See 
[query](../../tools_modules/gfsh/command-pages/query.html).
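-
-For example (any region name works; /exampleRegion is used here for consistency with the samples above):
-
-``` pre
-gfsh>query --query="SELECT * FROM /exampleRegion"
-```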
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/querying_basics/supported_character_sets.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/querying_basics/supported_character_sets.html.md.erb 
b/geode-docs/developing/querying_basics/supported_character_sets.html.md.erb
deleted file mode 100644
index 1383ee9..0000000
--- a/geode-docs/developing/querying_basics/supported_character_sets.html.md.erb
+++ /dev/null
@@ -1,7 +0,0 @@
----
-title:  Supported Character Sets
----
-
-Geode query language supports the full ASCII and Unicode character sets.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/querying_basics/what_is_a_query_string.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/querying_basics/what_is_a_query_string.html.md.erb 
b/geode-docs/developing/querying_basics/what_is_a_query_string.html.md.erb
deleted file mode 100644
index c2999bb..0000000
--- a/geode-docs/developing/querying_basics/what_is_a_query_string.html.md.erb
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title:  Building a Query String
----
-
-<a id="what_is_a_query_string__section_1866AE6026DE4D66A2CD2363C1BC0406"></a>
-A query string is a fully formed OQL statement that can be passed to a query 
engine and executed against a data set. To build a query string, you combine 
supported keywords, expressions, and operators to create an expression that 
returns the information you require.
-
-A query string follows the rules specified by the query language and grammar. 
It can include:
-
--   **Namescopes**. For example, the IMPORT statement. See [IMPORT 
Statement](../query_select/the_import_statement.html#concept_2E9F15B2FE9041238B54736103396BF7).
-   **Path expressions**. For example, in the query `SELECT * FROM /exampleRegion`, `/exampleRegion` is a path expression. See [FROM Clause](../query_select/the_from_clause.html#the_from_clause).
-   **Attribute names**. For example, in the query `SELECT DISTINCT * FROM /exampleRegion p WHERE p.position1.secId = '1'`, we access the `secId` attribute of the Position object. See [WHERE Clause](../query_select/the_where_clause.html#the_where_clause).
-   **Method invocations**. For example, in the query `SELECT DISTINCT * FROM /exampleRegion p WHERE p.name.startsWith('Bo')`, we invoke the `startsWith` method on the Name object. See [WHERE Clause](../query_select/the_where_clause.html#the_where_clause).
--   **Operators**. For example, comparison operators (=,&lt;,&gt;,&lt;&gt;), 
unary operators (NOT), logical operators (AND, OR) and so on. See 
[Operators](../query_additional/operators.html#operators) for a complete list.
--   **Literals**. For example, boolean, date, time and so on. See [Supported 
Literals](../query_additional/literals.html#literals) for a complete list.
-   **Query bind parameters**. For example, in the query `SELECT DISTINCT * FROM $1 p WHERE p.status = $2`, $1 and $2 are parameters that can be passed to the query during runtime. See [Using Query Bind Parameters](../query_additional/using_query_bind_parameters.html#concept_173E775FE46B47DF9D7D1E40680D34DF) for more details.
--   **Preset query functions**. For example, ELEMENT(expr) and 
IS\_DEFINED(expr). See [SELECT 
Statement](../query_select/the_select_statement.html#concept_85AE7D6B1E2941ED8BD2A8310A81753E)
 for other available functions.
-   **SELECT statements**. For example, in the example queries above, `SELECT *` or `SELECT DISTINCT *`. See [SELECT Statement](../query_select/the_select_statement.html#concept_85AE7D6B1E2941ED8BD2A8310A81753E) for more information.
--   **Comments**. OQL permits extra characters to accompany the query string 
without changing the string's definition. Form a multi-line comment by 
enclosing the comment body within `/*` and `*/` delimiters; OQL does not permit 
nested comments. A single line comment body is all the characters to the right 
of `--` (two hyphens) up to the end of the line.
-
-The components listed above can all be part of the query string, but none of 
the components are required. At a minimum, a query string contains an 
expression that can be evaluated against specified data.
-
-The following sections provide guidelines for the query language building 
blocks that are used when writing typical Geode queries.
-
--   **[IMPORT 
Statement](../../developing/query_select/the_import_statement.html)**
-
--   **[FROM Clause](../../developing/query_select/the_from_clause.html)**
-
--   **[WHERE Clause](../../developing/query_select/the_where_clause.html)**
-
--   **[SELECT 
Statement](../../developing/query_select/the_select_statement.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/region_options/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/region_options/chapter_overview.html.md.erb 
b/geode-docs/developing/region_options/chapter_overview.html.md.erb
deleted file mode 100644
index 53ad2fb..0000000
--- a/geode-docs/developing/region_options/chapter_overview.html.md.erb
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title:  Region Data Storage and Distribution
----
-
-The Apache Geode data storage and distribution models put your data in the 
right place at the right time. You should understand all the options for data 
storage in Geode before you configure your data regions.
-
--   **[Storage and Distribution 
Options](../../developing/region_options/storage_distribution_options.html)**
-
-    Geode provides several models for data storage and distribution, including 
partitioned or replicated regions as well as distributed or non-distributed 
regions (local cache storage).
-
--   **[Region Types](../../developing/region_options/region_types.html)**
-
-    Region types define region behavior within a single distributed system. 
You have various options for region data storage and distribution.
-
--   **[Region Data Stores and Data 
Accessors](../../developing/region_options/data_hosts_and_accessors.html)**
-
-    Understand the difference between members that store data for a region and 
members that act only as data accessors to the region.
-
--   **[Creating Regions 
Dynamically](../../developing/region_options/dynamic_region_creation.html)**
-
-    You can dynamically create regions in your application code and 
automatically instantiate them on members of a distributed system.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/region_options/data_hosts_and_accessors.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/region_options/data_hosts_and_accessors.html.md.erb 
b/geode-docs/developing/region_options/data_hosts_and_accessors.html.md.erb
deleted file mode 100644
index ed167b6..0000000
--- a/geode-docs/developing/region_options/data_hosts_and_accessors.html.md.erb
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title:  Region Data Stores and Data Accessors
----
-
-Understand the difference between members that store data for a region and 
members that act only as data accessors to the region.
-
-<a id="data_hosts_and_accessors__section_0EF33633F97B4C63AC34F523259AD310"></a>
-In most cases, when you define a data region in a member’s cache, you also 
specify whether the member is a data store. Members that store data for a 
region are referred to as data stores or data hosts. Members that do not store 
data are referred to as accessor members, or empty members. Any member, store 
or accessor, that defines a region can access it, put data into it, and receive 
events from other members. To configure a region so the member is a data 
accessor, you use configurations that specify no local data storage for the 
region. Otherwise, the member is a data store for the region.
-
-For server regions, suppress local data storage at region creation by 
specifying a region shortcut that contains the term
-"PROXY" in its name, such as `PARTITION_PROXY` or `REPLICATE_PROXY`.
-
-For client regions, suppress local data storage at region creation by 
specifying the `PROXY` region
-shortcut. Do not use the `CACHING_PROXY` shortcut for this purpose, as it 
allows local data storage.
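-
-A minimal sketch of the corresponding declarative configuration (the region name is illustrative; the first snippet belongs in a server's cache.xml, the second in a client's cache configuration):
-
-``` pre
-<!-- Server member that is an accessor for a partitioned region -->
-<region name="exampleRegion" refid="PARTITION_PROXY"/>
-
-<!-- Client region with no local storage; all operations go to the server -->
-<region name="exampleRegion" refid="PROXY"/>
-```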

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/region_options/dynamic_region_creation.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/region_options/dynamic_region_creation.html.md.erb 
b/geode-docs/developing/region_options/dynamic_region_creation.html.md.erb
deleted file mode 100644
index 2974f22..0000000
--- a/geode-docs/developing/region_options/dynamic_region_creation.html.md.erb
+++ /dev/null
@@ -1,180 +0,0 @@
----
-title:  Creating Regions Dynamically
----
-
-You can dynamically create regions in your application code and automatically 
instantiate them on members of a distributed system.
-
-If your application does not require partitioned regions, you can use the 
<span class="keyword 
apiname">org.apache.geode.cache.DynamicRegionFactory</span> class to 
dynamically create regions, or you can create them using the 
`<dynamic-region-factory>` element in the cache.xml file that defines the 
region. See 
[&lt;dynamic-region-factory&gt;](../../reference/topics/cache_xml.html#dynamic-region-factory).
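-
-A minimal cache.xml sketch of enabling the dynamic region factory (optional settings are omitted; consult the element reference linked above for the full set of options):
-
-``` pre
-<cache>
-  <dynamic-region-factory/>
-  ...
-</cache>
-```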
-
-Due to the number of options involved, most developers use functions to create 
regions dynamically in their applications, as described in this topic. Dynamic 
regions can also be created from the `gfsh` command line.
-
-For a complete discussion of using Geode functions, see [Function 
Execution](../function_exec/chapter_overview.html). Functions use the <span 
class="keyword apiname">org.apache.geode.cache.execute.FunctionService</span> 
class.
-
-For example, the following Java classes define and use a function for dynamic 
region creation:
-
-The <span class="keyword apiname">CreateRegionFunction</span> class defines a 
function invoked on a server by a client using the <span class="keyword 
apiname">onServer()</span> method of the <span class="keyword 
apiname">FunctionService</span> class. This function call initiates region 
creation by putting an entry into the region attributes metadata region. The 
entry key is the region name and the value is the set of region attributes used 
to create the region.
-
-``` pre
-#CreateRegionFunction.java
-
-import org.apache.geode.cache.Cache;
-import org.apache.geode.cache.CacheFactory;
-import org.apache.geode.cache.DataPolicy;
-import org.apache.geode.cache.Declarable;
-import org.apache.geode.cache.Region;
-import org.apache.geode.cache.RegionAttributes;
-import org.apache.geode.cache.RegionFactory;
-import org.apache.geode.cache.Scope;
-
-import org.apache.geode.cache.execute.Function;
-import org.apache.geode.cache.execute.FunctionContext;
-
-import java.util.Properties;
-
-public class CreateRegionFunction implements Function, Declarable {
-
-  private final Cache cache;
-  
-  private final Region<String,RegionAttributes> regionAttributesMetadataRegion;
-
-  private static final String REGION_ATTRIBUTES_METADATA_REGION = 
-                                     "_regionAttributesMetadata";
-  
-  public enum Status {SUCCESSFUL, UNSUCCESSFUL, ALREADY_EXISTS};
-
-  public CreateRegionFunction() {
-    this.cache = CacheFactory.getAnyInstance();
-    this.regionAttributesMetadataRegion = createRegionAttributesMetadataRegion();
-  }
-
-  public void execute(FunctionContext context) {
-    Object[] arguments = (Object[]) context.getArguments();
-    String regionName = (String) arguments[0];
-    RegionAttributes attributes = (RegionAttributes) arguments[1];
-
-    // Create or retrieve region
-    Status status = createOrRetrieveRegion(regionName, attributes);
-
-    // Return status
-    context.getResultSender().lastResult(status);
-  }
-  
-  private Status createOrRetrieveRegion(String regionName, 
-                                        RegionAttributes attributes) {
-    Status status = Status.SUCCESSFUL;
-    Region region = this.cache.getRegion(regionName);
-    if (region == null) {
-      // Put the attributes into the metadata region. The afterCreate call will
-      // actually create the region.
-      this.regionAttributesMetadataRegion.put(regionName, attributes);
-      
-      // Retrieve the region after creating it
-      region = this.cache.getRegion(regionName);
-      if (region == null) {
-        status = Status.UNSUCCESSFUL;
-      }
-    } else {
-      status = Status.ALREADY_EXISTS;
-    }
-    return status;
-  }
-  
-  private Region<String,RegionAttributes> 
-  createRegionAttributesMetadataRegion() {
-    Region<String, RegionAttributes> metaRegion = 
-                         this.cache.getRegion(REGION_ATTRIBUTES_METADATA_REGION);
-    if (metaRegion == null) {
-      RegionFactory<String, RegionAttributes> factory =
-                              this.cache.createRegionFactory();
-      factory.setDataPolicy(DataPolicy.REPLICATE);
-      factory.setScope(Scope.DISTRIBUTED_ACK);
-      factory.addCacheListener(new CreateRegionCacheListener());
-      metaRegion = factory.create(REGION_ATTRIBUTES_METADATA_REGION);
-    }
-    return metaRegion;
-  }
-
-  public String getId() {
-    return getClass().getSimpleName();
-  }
-
-  public boolean optimizeForWrite() {
-    return false;
-  }
-
-  public boolean isHA() {
-    return true;
-  }
-
-  public boolean hasResult() {
-    return true;
-  }
-
-  public void init(Properties properties) {
-  }
-}
-```
-
-The <span class="keyword apiname">CreateRegionCacheListener</span> class is a 
cache listener that implements two methods, <span class="keyword 
apiname">afterCreate()</span> and <span class="keyword 
apiname">afterRegionCreate()</span>. The <span class="keyword 
apiname">afterCreate()</span> method creates the region. The <span 
class="keyword apiname">afterRegionCreate()</span> method causes each new 
server to create all the regions defined in the metadata region.
-
-``` pre
-#CreateRegionCacheListener.java
-
-import org.apache.geode.cache.Cache;
-import org.apache.geode.cache.CacheFactory;
-import org.apache.geode.cache.Declarable;
-import org.apache.geode.cache.EntryEvent;
-import org.apache.geode.cache.Region;
-import org.apache.geode.cache.RegionAttributes;
-import org.apache.geode.cache.RegionEvent;
-import org.apache.geode.cache.RegionExistsException;
-
-import org.apache.geode.cache.util.CacheListenerAdapter;
-
-import java.util.Map;
-import java.util.Properties;
-
-public class CreateRegionCacheListener 
-             extends CacheListenerAdapter<String,RegionAttributes>
-             implements Declarable {
-
-  private Cache cache;
-  
-  public CreateRegionCacheListener() {
-    this.cache = CacheFactory.getAnyInstance();
-  }
-
-  public void afterCreate(EntryEvent<String,RegionAttributes> event) {
-    createRegion(event.getKey(), event.getNewValue());
-  }
-  
-  public void afterRegionCreate(RegionEvent<String,RegionAttributes> event) {
-    Region<String,RegionAttributes> region = event.getRegion();
-    for (Map.Entry<String,RegionAttributes> entry : region.entrySet()) {
-      createRegion(entry.getKey(), entry.getValue());
-    }
-  }
-  
-  private void createRegion(String regionName, RegionAttributes attributes) {
-    if (this.cache.getLogger().fineEnabled()) {
-      this.cache.getLogger().fine(
-                             "CreateRegionCacheListener creating region named: "
-                             + regionName + " with attributes: " + attributes);
-    }
-    try {
-      Region region = this.cache.createRegionFactory(attributes)
-        .create(regionName);
-      if (this.cache.getLogger().fineEnabled()) {
-        this.cache.getLogger().fine("CreateRegionCacheListener created: "
-                               + region);
-      }
-      System.out.println("CreateRegionCacheListener created: " + region);
-    } catch (RegionExistsException e) {/* ignore */}
-  }
-
-  public void init(Properties p) {
-  }
-}
-```
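-
-To round out the example, a hedged sketch of how a client might invoke the function. It assumes the function has been registered on the servers under the ID `CreateRegionFunction` and that the client passes serializable `RegionAttributes`; the argument-passing method on `Execution` differs by release (`withArgs` in older releases, `setArguments` in newer ones).
-
-``` pre
-import java.util.List;
-
-import org.apache.geode.cache.RegionAttributes;
-import org.apache.geode.cache.client.ClientCache;
-import org.apache.geode.cache.execute.Execution;
-import org.apache.geode.cache.execute.FunctionService;
-import org.apache.geode.cache.execute.ResultCollector;
-
-public class CreateRegionOnServer {
-
-  @SuppressWarnings("unchecked")
-  public static Object createRegion(ClientCache clientCache,
-                                    String regionName,
-                                    RegionAttributes<?, ?> attributes) {
-    // The function expects an Object[] of {regionName, attributes}.
-    Execution execution = FunctionService.onServer(clientCache)
-        .withArgs(new Object[] { regionName, attributes });
-
-    // Execute by ID; CreateRegionFunction.getId() returns the simple class name.
-    ResultCollector<?, ?> collector = execution.execute("CreateRegionFunction");
-
-    // With the default collector, the result is a List containing the
-    // Status value sent by lastResult().
-    List<Object> results = (List<Object>) collector.getResult();
-    return results.get(0);
-  }
-}
-```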
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/region_options/region_types.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/region_options/region_types.html.md.erb 
b/geode-docs/developing/region_options/region_types.html.md.erb
deleted file mode 100644
index 45908dd..0000000
--- a/geode-docs/developing/region_options/region_types.html.md.erb
+++ /dev/null
@@ -1,129 +0,0 @@
----
-title:  Region Types
----
-
-Region types define region behavior within a single distributed system. You 
have various options for region data storage and distribution.
-
-<a id="region_types__section_E3435ED1D0D142538B99FA69A9E449EF"></a>
-Within a Geode distributed system, you can define distributed and non-distributed regions, and you can define regions whose data is spread across the distributed system or entirely contained in a single member.
-
-Your choice of region type is governed in part by the type of application you 
are running. In particular, you need to use specific region types for your 
servers and clients for effective communication between the two tiers:
-
--   Server regions are created inside a `Cache` by servers and are accessed by 
clients that connect to the servers from outside the server's distributed 
system. Server regions must have region type partitioned or replicated. Server 
region configuration uses the `RegionShortcut` enum settings.
--   Client regions are created inside a `ClientCache` by clients and are 
configured to distribute data and events between the client and the server 
tier. Client regions must have region type `local`. Client region configuration 
uses the `ClientRegionShortcut` enum settings.
--   Peer regions are created inside a `Cache`. Peer regions may be server 
regions, or they may be regions that are not accessed by clients. Peer regions 
can have any region type. Peer region configuration uses the `RegionShortcut` 
enum settings.
-
-When you configure a server or peer region using `gfsh` or with the 
`cache.xml` file, you can use *region shortcuts* to define the basic 
configuration of your region. A region shortcut provides a set of default 
configuration attributes that are designed for various types of caching 
architectures. You can then add additional configuration attributes as needed 
to customize your application. For more information and a complete reference of 
these region shortcuts, see [Region Shortcuts 
Reference](../../reference/topics/region_shortcuts_reference.html#reference_lt4_54c_lk).
-
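-For example, a brief sketch of the programmatic equivalent on a server or peer member: start from a shortcut and then layer additional attributes on top (the region name and expiration values here are hypothetical).
-
-``` pre
-import org.apache.geode.cache.Cache;
-import org.apache.geode.cache.ExpirationAction;
-import org.apache.geode.cache.ExpirationAttributes;
-import org.apache.geode.cache.Region;
-import org.apache.geode.cache.RegionFactory;
-import org.apache.geode.cache.RegionShortcut;
-
-public class ShortcutExample {
-  public static Region<String, String> createCustomizedRegion(Cache cache) {
-    // REPLICATE supplies the baseline attributes; later calls customize them.
-    RegionFactory<String, String> factory =
-        cache.createRegionFactory(RegionShortcut.REPLICATE);
-    factory.setStatisticsEnabled(true); // required for expiration
-    factory.setEntryIdleTimeout(
-        new ExpirationAttributes(600, ExpirationAction.DESTROY));
-    return factory.create("customizedRegion");
-  }
-}
-```
-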
-<a id="region_types__section_A3449B07598C47A881D9219574DE46C5"></a>
-
-These are the primary configuration choices for each data region.
-
-<table>
-<colgroup>
-<col width="33%" />
-<col width="34%" />
-<col width="33%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th>Region Type</th>
-<th>Description</th>
-<th>Best suited for...</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td>Partitioned</td>
-<td>System-wide setting for the data set. Data is divided into buckets across 
the members that define the region. For high availability, configure redundant 
copies so each bucket is stored in multiple members with one member holding the 
primary.</td>
-<td>Server regions and peer regions
-<ul>
-<li>Very large data sets</li>
-<li>High availability</li>
-<li>Write performance</li>
-<li>Partitioned event listeners and data loaders</li>
-</ul></td>
-</tr>
-<tr class="even">
-<td>Replicated (distributed)</td>
-<td>Holds all data from the distributed region. The data from the distributed 
region is copied into the member replica region. Can be mixed with 
non-replication, with some members holding replicas and some holding 
non-replicas.</td>
-<td>Server regions and peer regions
-<ul>
-<li>Read heavy, small datasets</li>
-<li>Asynchronous distribution</li>
-<li>Query performance</li>
-</ul></td>
-</tr>
-<tr class="odd">
-<td>Distributed non-replicated</td>
-<td>Data is spread across the members that define the region. Each member 
holds only the data it has expressed interest in. Can be mixed with 
replication, with some members holding replicas and some holding 
non-replicas.</td>
-<td>Peer regions, but not server regions and not client regions
-<ul>
-<li>Asynchronous distribution</li>
-<li>Query performance</li>
-</ul></td>
-</tr>
-<tr class="even">
-<td>Non-distributed (local)</td>
-<td>The region is visible only to the defining member.</td>
-<td>Client regions and peer regions
-<ul>
-<li>Data that is not shared between applications</li>
-</ul></td>
-</tr>
-</tbody>
-</table>
-
-## <a id="region_types__section_C92C7DBD8EF44F1789FCB36281D3F8BF" 
class="no-quick-link"></a>Partitioned Regions
-
-Partitioning is a good choice for very large server regions. Partitioned 
regions are ideal for data sets in the hundreds of gigabytes and beyond.
-
-**Note:**
-Partitioned regions generally require more JDBC connections than other region 
types because each member that hosts data must have a connection.
-
-Partitioned regions group your data into buckets, each of which is stored on a 
subset of all of the system members. Data location in the buckets does not 
affect the logical view: all members see the same logical data set.
-
-Use partitioning for:
-
--   **Large data sets**. Store data sets that are too large to fit into a 
single member, and all members will see the same logical data set. Partitioned 
regions divide the data into units of storage called buckets that are split 
across the members hosting the partitioned region data, so no member needs to 
host all of the region’s data. Geode provides dynamic redundancy recovery and 
rebalancing of partitioned regions, making them the choice for large-scale data 
containers. More members in the system can accommodate more uniform balancing 
of the data across all host members, allowing system throughput (both gets and 
puts) to scale as new members are added.
--   **High availability**. Partitioned regions allow you to configure the number 
of copies of your data that Geode should make. If a member fails, your data 
will be available without interruption from the remaining members. Partitioned 
regions can also be persisted to disk for additional high availability.
--   **Scalability**. Partitioned regions can scale to large amounts of data 
because the data is divided between the members available to host the region. 
Increase your data capacity dynamically by simply adding new members. 
Partitioned regions also allow you to scale your processing capacity. Because 
your entries are spread out across the members hosting the region, reads and 
writes to those entries are also spread out across those members.
--   **Good write performance**. You can configure the number of copies of your 
data. The amount of data transmitted per write does not increase with the 
number of members. By contrast, with replicated regions, each write must be 
sent to every member that has the region replicated, so the amount of data 
transmitted per write increases with the number of members.
-
-In partitioned regions, you can colocate keys within buckets and across 
multiple partitioned regions. You can also control which members store which 
data buckets.
-
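-A hedged sketch of these options through the Java API follows; the region names are hypothetical, and `Order` entries are colocated with `Customer` entries so related data lands in the same member's buckets.
-
-``` pre
-import org.apache.geode.cache.Cache;
-import org.apache.geode.cache.PartitionAttributesFactory;
-import org.apache.geode.cache.RegionFactory;
-import org.apache.geode.cache.RegionShortcut;
-
-public class PartitionedRegionExample {
-  public static void createRegions(Cache cache) {
-    // One redundant copy: each bucket has a primary plus one backup.
-    PartitionAttributesFactory<String, Object> customerPaf =
-        new PartitionAttributesFactory<>();
-    customerPaf.setRedundantCopies(1);
-    RegionFactory<String, Object> customerFactory =
-        cache.createRegionFactory(RegionShortcut.PARTITION);
-    customerFactory.setPartitionAttributes(customerPaf.create());
-    customerFactory.create("Customer");
-
-    // Colocate orders with customers; both regions must use the same
-    // redundancy, and the region named in setColocatedWith must exist first.
-    PartitionAttributesFactory<String, Object> orderPaf =
-        new PartitionAttributesFactory<>();
-    orderPaf.setRedundantCopies(1);
-    orderPaf.setColocatedWith("Customer");
-    RegionFactory<String, Object> orderFactory =
-        cache.createRegionFactory(RegionShortcut.PARTITION);
-    orderFactory.setPartitionAttributes(orderPaf.create());
-    orderFactory.create("Order");
-  }
-}
-```
-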
-## <a id="region_types__section_iwt_dnj_bm" 
class="no-quick-link"></a>Replicated Regions
-
-
-Replicated regions provide the highest performance in terms of throughput and 
latency.
-Replication is a good choice for small to medium size server regions.
-
-Use replicated regions for:
-
--   **Small amounts of data required by all members of the distributed 
system**. For example, currency rate information and mortgage rates.
--   **Data sets that can be contained entirely in a single member**. Each 
replicated region holds the complete data set for the region.
--   **High performance data access**. Replication guarantees local access from 
the heap for application threads, providing the lowest possible latency for 
data access.
--   **Asynchronous distribution**. All distributed regions, replicated and 
non-replicated, provide the fastest distribution speeds.
-
-## <a id="region_types__section_2232BEC969F74CDB91B1BB74FEF67EE1" 
class="no-quick-link"></a>Distributed, Non-Replicated Regions
-
-Distributed regions provide the same performance as replicated regions, but 
each member stores only data in which it has expressed an interest, either by 
subscribing to events from other members or by defining the data entries in its 
cache.
-
-Use distributed, non-replicated regions for:
-
--   **Peer regions, but not server regions or client regions**. Server regions 
must be either replicated or partitioned. Client regions must be local.
--   **Data sets where individual members need only notification and updates 
for changes to a subset of the data**. In non-replicated regions, each member 
receives only update events for the data entries it has defined in the local 
cache.
--   **Asynchronous distribution**. All distributed regions, replicated and 
non-replicated, provide the fastest distribution speeds.
-
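-As a rough sketch, assuming a peer `Cache` (the region name is hypothetical): no region shortcut maps directly to this combination, so the data policy and scope are set explicitly.
-
-``` pre
-import org.apache.geode.cache.Cache;
-import org.apache.geode.cache.DataPolicy;
-import org.apache.geode.cache.Region;
-import org.apache.geode.cache.RegionFactory;
-import org.apache.geode.cache.Scope;
-
-public class DistributedNonReplicatedExample {
-  public static Region<String, String> create(Cache cache) {
-    RegionFactory<String, String> factory = cache.createRegionFactory();
-    factory.setDataPolicy(DataPolicy.NORMAL);  // store only locally defined entries
-    factory.setScope(Scope.DISTRIBUTED_ACK);   // distribute operations to peers
-    return factory.create("nonReplicatedRegion");
-  }
-}
-```
-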
-## <a id="region_types__section_A8150BDBC74E4019B1942481877A4370" 
class="no-quick-link"></a>Local Regions
-
-**Note:**
-When created using the `ClientRegionShortcut` settings, client regions are 
automatically defined as local, since all client distribution activities go to 
and come from the server tier.
-
-The local region has no peer-to-peer distribution activity.
-
-Use local regions for:
-
--   **Client regions**. Distribution is only between the client and server 
tier.
--   **Private data sets for the defining member**. The local region is not 
visible to peer members.
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/region_options/storage_distribution_options.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/region_options/storage_distribution_options.html.md.erb 
b/geode-docs/developing/region_options/storage_distribution_options.html.md.erb
deleted file mode 100644
index 7ed2732..0000000
--- 
a/geode-docs/developing/region_options/storage_distribution_options.html.md.erb
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title:  Storage and Distribution Options
----
-
-Geode provides several models for data storage and distribution, including 
partitioned or replicated regions as well as distributed or non-distributed 
regions (local cache storage).
-
-## <a id="concept_B18B7754E7C7485BA6D66F2DDB7A11FB__section_787D674A64244871AE49CBB58475088E" class="no-quick-link"></a>Peer-to-Peer Region Storage and Distribution
-
-At its most general, data management means having current data available when 
and where your applications need it. In a properly configured Geode 
installation, you store your data in your local members and Geode automatically 
distributes it to the other members that need it according to your cache 
configuration settings. You may be storing very large data objects that require 
special consideration, or you may have a high volume of data requiring careful 
configuration to safeguard your application's performance or memory use. You 
may need to be able to explicitly lock some data during particular operations. 
Most data management features are available as configuration options, which you 
can specify using the `gfsh` cluster configuration service, the `cache.xml` file, or the API. Once configured, Geode manages the data automatically. For example, this is how you manage data distribution, disk storage, data expiration activities, and data partitioning. A few features are managed at run-time through the API.
-
-At the architectural level, data distribution runs between peers in a single 
system and between clients and servers.
-
--   Peer-to-peer provides the core distribution and storage models, which are 
specified as attributes on the data regions.
-
--   For client/server, you choose which data regions to share between the 
client and server tiers. Then, within each region, you can fine-tune the data 
that the server automatically sends to the client by subscribing to subsets.
-
-Data storage in any type of installation is based on the peer-to-peer 
configuration for each individual distributed system. Data and event 
distribution is based on a combination of the peer-to-peer and system-to-system 
configurations.
-
-Storage and distribution models are configured through cache and region 
attributes. The main choices are partitioned, replicated, or just distributed. 
All server regions must be partitioned or replicated. Each region’s 
`data-policy` and `subscription-attributes`, and its `scope` if it is not a 
partitioned region, interact for finer control of data distribution.
-
-## <a id="concept_B18B7754E7C7485BA6D66F2DDB7A11FB__section_A364D16DFADA49D1A838A7EAF8E4251C" class="no-quick-link"></a>Storing Data in the Local Cache
-
-To store data in your local cache, use a region `refid` with a 
`RegionShortcut` or `ClientRegionShortcut` that has local state. These 
automatically set the region `data-policy` to a non-empty policy. Regions 
without storage can send and receive event distributions without storing 
anything in your application heap. With the other settings, all entry 
operations received are stored locally.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/storing_data_on_disk/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/storing_data_on_disk/chapter_overview.html.md.erb 
b/geode-docs/developing/storing_data_on_disk/chapter_overview.html.md.erb
deleted file mode 100644
index 96c6a3d..0000000
--- a/geode-docs/developing/storing_data_on_disk/chapter_overview.html.md.erb
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title:  Persistence and Overflow
----
-
-You can persist data on disk for backup purposes and overflow it to disk to 
free up memory without completely removing the data from your cache.
-
-**Note:**
-This supplements the general steps for managing data regions provided in 
[Basic Configuration and Programming](../../basic_config/book_intro.html).
-
-All disk storage uses Apache Geode [Disk Storage](../../managing/disk_storage/chapter_overview.html).
-
--   **[How Persistence and Overflow 
Work](../../developing/storing_data_on_disk/how_persist_overflow_work.html)**
-
-    To use Geode persistence and overflow, you should understand how they work 
with your data.
-
--   **[Configure Region Persistence and 
Overflow](../../developing/storing_data_on_disk/storing_data_on_disk.html)**
-
-    Plan persistence and overflow for your data regions and configure them 
accordingly.
-
--   **[Overflow Configuration 
Examples](../../developing/storing_data_on_disk/overflow_config_examples.html)**
-
-    The `cache.xml` examples show configuration of region and server 
subscription queue overflows.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/storing_data_on_disk/how_persist_overflow_work.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/storing_data_on_disk/how_persist_overflow_work.html.md.erb
 
b/geode-docs/developing/storing_data_on_disk/how_persist_overflow_work.html.md.erb
deleted file mode 100644
index 2c08c33..0000000
--- 
a/geode-docs/developing/storing_data_on_disk/how_persist_overflow_work.html.md.erb
+++ /dev/null
@@ -1,47 +0,0 @@
----
-title:  How Persistence and Overflow Work
----
-
-To use Geode persistence and overflow, you should understand how they work 
with your data.
-
-<a id="how_persist_overflow_work__section_jzl_wwb_pr"></a>
-Geode persists and overflows several types of data. You can persist or 
overflow the application data in your regions. In addition, Geode persists and 
overflows messaging queues between distributed systems, to manage memory 
consumption and provide high availability.
-
-Persistent data outlives the member where the region resides and can be used 
to initialize the region at creation. Overflow acts only as an extension of the 
region in memory.
-
-The data is written to disk according to the configuration of Geode disk 
stores. For any disk option, you can specify the name of the disk store to use 
or use the Geode default disk store. See [Disk 
Storage](../../managing/disk_storage/chapter_overview.html).
-
-## <a id="how_persist_overflow_work__section_78F2D1820B6C48859A0E5411CE360105" 
class="no-quick-link"></a>How Data Is Persisted and Overflowed
-
-For persistence, the entry keys and values are copied to disk. For overflow, 
only the entry values are copied. Other data, such as statistics and user 
attributes, are retained in memory only.
-
--   Data regions are overflowed to disk by least recently used (LRU) entries 
because those entries are deemed of least interest to the application and 
therefore less likely to be accessed.
--   Server subscription queues overflow most recently used (MRU) entries. 
These are the messages that are at the end of the queue and so are last in line 
to be sent to the client.
-
-## <a id="how_persist_overflow_work__section_1A3AE288145749058880D98C699FE124" 
class="no-quick-link"></a>Persistence
-
-Persistence provides a disk backup of region entry data. The keys and values 
of all entries are saved to disk, like having a replica of the region on disk. 
Region entry operations such as put and destroy are carried out in memory and 
on disk.
-
-<img src="../../images_svg/developing_persistence.svg" 
id="how_persist_overflow_work__image_B53E1A5A568D437692247A2FD99348A6" 
class="image" />
-
-When the member stops for any reason, the region data on disk remains. In 
partitioned regions, where data buckets are divided among members, this can 
result in some data only on disk and some on disk and in memory. The disk data 
can be used at member startup to populate the same region.
-
-## <a id="how_persist_overflow_work__section_55A7BBEB48574F649C40EB5D3E9CD0AC" 
class="no-quick-link"></a>Overflow
-
-Overflow limits region size in memory by moving the values of least recently 
used (LRU) entries to disk. Overflow basically uses disk as a swap space for 
entry values. If an entry is requested whose value is only on disk, the value 
is copied back up into memory, possibly causing the value of a different LRU 
entry to be moved to disk. As with persisted entries, overflowed entries are 
maintained on disk just as they are in memory.
-
-In this figure, the value of entry X has been moved to disk to make space in 
memory. The key for X remains in memory. From the distributed system 
perspective, the value on disk is as much a part of the region as the data in 
memory.
-
-<img src="../../images_svg/developing_overflow.svg" 
id="how_persist_overflow_work__image_1F89C9FBACB54EDA844778EC60F61B8D" 
class="image" />
-
-## <a id="how_persist_overflow_work__section_9CBEBC0B59554DB49CE4941435793C51" 
class="no-quick-link"></a>Persistence and Overflow Together
-
-Used together, persistence and overflow keep all entry keys and values on disk 
and only the most active entry values in memory. The removal of an entry value 
from memory due to overflow has no effect on the disk copy as all entries are 
already on disk.
-
-<img src="../../images_svg/developing_persistence_and_overflow.svg" 
id="how_persist_overflow_work__image_E40D9C2EA238406A991E954477C7EB78" 
class="image" />
-
-## Persistence and Multi-Site Configurations
-
-Multi-site gateway sender queues overflow most recently used (MRU) entries. 
These are the messages that are at the end of the queue and so are last in line 
to be sent to the remote site. You can also configure gateway sender queues to 
persist for high availability.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/storing_data_on_disk/overflow_config_examples.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/storing_data_on_disk/overflow_config_examples.html.md.erb
 
b/geode-docs/developing/storing_data_on_disk/overflow_config_examples.html.md.erb
deleted file mode 100644
index ca9d7cd..0000000
--- 
a/geode-docs/developing/storing_data_on_disk/overflow_config_examples.html.md.erb
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title:  Overflow Configuration Examples
----
-
-The `cache.xml` examples show configuration of region and server subscription 
queue overflows.
-
-<a id="overflow_config_examples__section_FD38DA72706245C996ACB7B23927F6AF"></a>
-Configure overflow criteria based on one of these factors:
-
--   Entry count
--   Absolute memory consumption
--   Memory consumption as a percentage of the application heap (not available 
for server subscription queues)
-
-Configuration of region overflow:
-
-``` pre
-<!-- Overflow when the region goes over 10000 entries -->
-<region-attributes>
-  <eviction-attributes>
-    <lru-entry-count maximum="10000" action="overflow-to-disk"/>
-  </eviction-attributes>
-</region-attributes>
-```
-
-Configuration of server's client subscription queue overflow:
-
-``` pre
-<!-- Overflow the server's subscription queues when the queues reach 1 MB of memory -->
-<cache> 
-  <cache-server> 
-    <client-subscription eviction-policy="mem" capacity="1"/> 
-  </cache-server> 
-</cache>
-```
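-
-For reference, a hedged sketch of roughly equivalent settings through the Java API (the region name and shortcut are illustrative, and overflow here uses the cache's default disk store):
-
-``` pre
-import org.apache.geode.cache.Cache;
-import org.apache.geode.cache.EvictionAction;
-import org.apache.geode.cache.EvictionAttributes;
-import org.apache.geode.cache.RegionShortcut;
-import org.apache.geode.cache.server.CacheServer;
-
-public class OverflowConfigExample {
-  public static void configure(Cache cache) {
-    // Region overflow: evict entry values to disk past 10000 entries.
-    cache.createRegionFactory(RegionShortcut.PARTITION)
-        .setEvictionAttributes(EvictionAttributes.createLRUEntryAttributes(
-            10000, EvictionAction.OVERFLOW_TO_DISK))
-        .create("overflowRegion");
-
-    // Server subscription queue overflow: evict to disk past 1 MB of memory.
-    CacheServer server = cache.addCacheServer();
-    server.getClientSubscriptionConfig().setEvictionPolicy("mem");
-    server.getClientSubscriptionConfig().setCapacity(1);
-    // Set the port and start() the server as usual after configuring it.
-  }
-}
-```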
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ff80a931/geode-docs/developing/storing_data_on_disk/storing_data_on_disk.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/storing_data_on_disk/storing_data_on_disk.html.md.erb 
b/geode-docs/developing/storing_data_on_disk/storing_data_on_disk.html.md.erb
deleted file mode 100644
index 9aefd7c..0000000
--- 
a/geode-docs/developing/storing_data_on_disk/storing_data_on_disk.html.md.erb
+++ /dev/null
@@ -1,62 +0,0 @@
----
-title:  Configure Region Persistence and Overflow
----
-
-Plan persistence and overflow for your data regions and configure them 
accordingly.
-
-<a id="storing_data_on_disk__section_E253562A46114CF0A4E47048D8143999"></a>
-Use the following steps to configure your data regions for persistence and 
overflow:
-
-1.  Configure your disk stores as needed. See [Designing and Configuring Disk 
Stores](../../managing/disk_storage/using_disk_stores.html#defining_disk_stores).
 The cache disk store defines where and how the data is written to disk.
-
-    ``` pre
-    <disk-store name="myPersistentStore" . . . >
-    <disk-store name="myOverflowStore" . . . >
-    ```
-
-2.  Specify the persistence and overflow criteria for the region. If you are 
not using the default disk store, provide the disk store name in your region 
attributes configuration. To write asynchronously to disk, specify 
`disk-synchronous="false"`.
-    -   For overflow, specify the overflow criteria in the region's 
`eviction-attributes` and name the disk store to use.
-
-        Example:
-
-        ``` pre
-        <region name="overflowRegion" . . . >
-          <region-attributes disk-store-name="myOverflowStore" 
disk-synchronous="true">
-            <eviction-attributes>
-              <!-- Overflow to disk when 100 megabytes of data reside in the
-                   region -->
-              <lru-memory-size maximum="100" action="overflow-to-disk"/>
-            </eviction-attributes>
-          </region-attributes>
-        </region>
-        ```
-
-        gfsh:
-
-        You cannot configure `lru-memory-size` using gfsh.
-    -   For persistence, set the `data-policy` to `persistent-replicate` and 
name the disk store to use.
-
-        Example:
-
-        ``` pre
-        <region name="partitioned_region" refid="PARTITION_PERSISTENT">
-          <region-attributes disk-store-name="myPersistentStore">
-            . . . 
-          </region-attributes>
-        </region> 
-        ```
-
-When you start your members, overflow and persistence are applied automatically, using the disk stores and disk write behaviors you have configured.
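-
-A hedged Java-API sketch of a comparable configuration follows; the disk store directories, region names, and shortcuts are illustrative only.
-
-``` pre
-import java.io.File;
-
-import org.apache.geode.cache.Cache;
-import org.apache.geode.cache.EvictionAction;
-import org.apache.geode.cache.EvictionAttributes;
-import org.apache.geode.cache.RegionShortcut;
-import org.apache.geode.cache.util.ObjectSizer;
-
-public class PersistOverflowConfig {
-  public static void configure(Cache cache) {
-    // 1. Define the disk stores (the directories must already exist).
-    cache.createDiskStoreFactory()
-        .setDiskDirs(new File[] { new File("persistData") })
-        .create("myPersistentStore");
-    cache.createDiskStoreFactory()
-        .setDiskDirs(new File[] { new File("overflowData") })
-        .create("myOverflowStore");
-
-    // 2a. Overflow region: move entry values to disk past 100 MB in memory.
-    cache.createRegionFactory(RegionShortcut.PARTITION)
-        .setDiskStoreName("myOverflowStore")
-        .setDiskSynchronous(true)
-        .setEvictionAttributes(EvictionAttributes.createLRUMemoryAttributes(
-            100, ObjectSizer.DEFAULT, EvictionAction.OVERFLOW_TO_DISK))
-        .create("overflowRegion");
-
-    // 2b. Persistent region: keys and values are also written to disk.
-    cache.createRegionFactory(RegionShortcut.PARTITION_PERSISTENT)
-        .setDiskStoreName("myPersistentStore")
-        .create("partitioned_region");
-  }
-}
-```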
-
-**Note:**
-You can also configure regions and disk stores using the `gfsh` command-line interface. See [Region Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_EF03119A40EE492984F3B6248596E1DD) and [Disk Store Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_1ACC91B493EE446E89EC7DBFBBAE00EA).
-
-<a id="storing_data_on_disk__section_0D825566F508444C98DFE57527962FED"></a>
-
-| Related Topics                                                                      |
-|--------------------------------------------------------------------------------------|
-| `org.apache.geode.cache.RegionAttributes` for data region persistence information   |
-| `org.apache.geode.cache.EvictionAttributes` for data region overflow information    |
-| `org.apache.geode.cache.server.ClientSubscriptionConfig`                            |
-
-
