http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/running/running_the_cacheserver.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/running/running_the_cacheserver.html.md.erb 
b/geode-docs/configuring/running/running_the_cacheserver.html.md.erb
new file mode 100644
index 0000000..9de1189
--- /dev/null
+++ b/geode-docs/configuring/running/running_the_cacheserver.html.md.erb
@@ -0,0 +1,182 @@
+---
+title:  Running Geode Server Processes
+---
+
+A Geode server is a process that runs as a long-lived, configurable member of 
a client/server system.
+
+<a id="running_the_cacheserver__section_6C2B495518C04064A181E7917CA81FC1"></a>
+The Geode server is used primarily for hosting long-lived data regions and for 
running standard Geode processes such as the server in a client/server 
configuration. You can start and stop servers using the following methods:
+
+-   The `gfsh` tool allows you to manage Geode server processes from the 
command line.
+-   You can also start, stop, and manage Geode servers through the `org.apache.geode.distributed.ServerLauncher` API. The `ServerLauncher` API can be used only for Geode servers that were started with `gfsh` or with the `ServerLauncher` class itself. See the JavaDocs for additional specifics on using the `ServerLauncher` API.
+
+## <a id="running_the_cacheserver__section_E15FB1B039CE4F6CB2E4B5618D7ECAA1" 
class="no-quick-link"></a>Default Server Configuration and Log Files
+
+The `gfsh` utility uses a working directory for its configuration files and 
log files. These are the defaults and configuration options:
+
+-   When you start a standalone server using `gfsh`, `gfsh` automatically loads the required JAR files `$GEMFIRE/lib/server-dependencies.jar` and `$JAVA_HOME/lib/tools.jar` into the CLASSPATH of the JVM process. If you start a standalone server using the `ServerLauncher` API, you must specify `$GEMFIRE/lib/server-dependencies.jar` in the command you use to launch the process. For more information on CLASSPATH settings in Geode, see [Setting Up the CLASSPATH](../../getting_started/setup_classpath.html).
+-   Servers are configured like any other Geode process, with `gemfire.properties` and shared cluster configuration files. The server is not programmable except through application plug-ins. Typically, you provide the `gemfire.properties` file and the `gfsecurity.properties` file (if you are using a separate, restricted-access security settings file). You can also specify a `cache.xml` file in the cache server's working directory.
+-   By default, a new server started with `gfsh` receives its initial cache 
configuration from the cluster configuration service, assuming the locator is 
running the cluster configuration service. If you specify a group when starting 
the server, the server also receives configurations that apply to a group. The 
shared configuration consists of `cache.xml` files, `gemfire.properties` files, 
and deployed jar files. You can disable use of the cluster configuration 
service by specifying `--use-cluster-configuration=false` when starting the 
server using `gfsh`.
+
+    See [Overview of the Cluster Configuration 
Service](../cluster_config/gfsh_persist.html#concept_r22_hyw_bl).
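+
+    For example, to start a server that does not use the cluster configuration service (the server name shown is illustrative):
+
+    ``` pre
+    gfsh>start server --name=server1 --use-cluster-configuration=false
+    ```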
+
+-   If you are using the Spring Framework, you can specify a Spring 
ApplicationContext XML file when starting up your server in `gfsh` by using the 
`--spring-xml-location` command-line option. This option allows you to 
bootstrap your Geode server process with your Spring application's 
configuration. See [Spring 
documentation](http://docs.spring.io/spring/docs/3.2.x/spring-framework-reference/html/resources.html#resources-app-ctx)
 for more information on this file.
+-   Log file output defaults to `server_name.log` in the cache server's working directory. If you restart a server with the same server name, the existing *server\_name*.log file is automatically renamed (for example, to `server1-01-01.log` or `server1-02-01.log`). You can modify the level of logging detail in this file by specifying a level in the `--log-level` argument when starting up the server.
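+
+    For example, to capture more detailed logging (the server name is illustrative; `fine` is one of the standard Geode log levels):
+
+    ``` pre
+    gfsh>start server --name=server1 --log-level=fine
+    ```
+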
+-   By default, the server will start in a subdirectory (named after the 
server's specified `--name`) under the directory where `gfsh` is executed. This 
subdirectory is considered the current working directory. You can also specify 
a different working directory when starting the cache server in `gfsh`.
+-   By default, a server process that has been shut down and disconnected due to a network partition event or member unresponsiveness restarts itself and automatically tries to reconnect to the existing distributed system. See [Handling Forced Cache Disconnection Using Autoreconnect](../../managing/autoreconnect/member-reconnect.html#concept_22EE6DDE677F4E8CAF5786E17B4183A9) for more details.
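+
+    For example, a sketch of turning this behavior off at startup, assuming the `disable-auto-reconnect` configuration property (see the linked page for the authoritative property name):
+
+    ``` pre
+    gfsh>start server --name=server1 --J=-Dgemfire.disable-auto-reconnect=true
+    ```
+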
+-   You can pass JVM parameters to the server's JVM by specifying `--J=-Dproperty.name=value` at server startup. These parameters can be Java properties or Geode configuration properties such as `gemfire.jmx-manager`. For example:
+
+    ``` pre
+    gfsh>start server --name=server1 --J=-Dgemfire.jmx-manager=true \
+    --J=-Dgemfire.jmx-manager-start=true --J=-Dgemfire.http-port=8080
+    ```
+
+-   We recommend that you do not use the `-XX:+UseCompressedStrings` and `-XX:+UseStringCache` JVM options when starting up servers. These options can cause data corruption and compatibility issues.
+
+## <a id="running_the_cacheserver__section_07001480D33745139C3707EDF8166D86" 
class="no-quick-link"></a>Start the Server
+
+The startup syntax for Geode servers in `gfsh` is:
+
+``` pre
+start server --name=value [--assign-buckets(=value)] [--bind-address=value]
+    [--cache-xml-file=value] [--classpath=value] [--disable-default-server(=value)]
+    [--disable-exit-when-out-of-memory(=value)] [--enable-time-statistics(=value)]
+    [--force(=value)] [--include-system-classpath(=value)] [--properties-file=value]
+    [--security-properties-file=value]
+    [--group=value] [--locators=value] [--locator-wait-time=value] [--log-level=value]
+    [--mcast-address=value] [--mcast-port=value] [--memcached-port=value]
+    [--memcached-protocol=value] [--rebalance(=value)] [--server-bind-address=value]
+    [--server-port=value] [--spring-xml-location=value]
+    [--statistic-archive-file=value] [--dir=value] [--initial-heap=value]
+    [--max-heap=value] [--use-cluster-configuration(=value)] [--J=value(,value)*]
+    [--critical-heap-percentage=value] [--critical-off-heap-percentage=value]
+    [--eviction-heap-percentage=value] [--eviction-off-heap-percentage=value]
+    [--hostname-for-clients=value] [--max-connections=value]
+    [--message-time-to-live=value] [--max-message-count=value] [--max-threads=value]
+    [--socket-buffer-size=value] [--lock-memory=value] [--off-heap-memory-size=value]
+```
+
+**Note:**
+When both `--max-heap` and `--initial-heap` are specified during server startup, additional GC parameters are specified internally by Geode's Resource Manager. If you do not want the additional default GC properties set by the Resource Manager, use the `-Xms` and `-Xmx` JVM options instead. See [Controlling Heap Use with the Resource Manager](../../managing/heap_use/heap_management.html#configuring_resource_manager) for more information.
+
+The following `gfsh start server` commands specify a `cache.xml` file for cache configuration and use different incoming client connection ports:
+
+``` pre
+gfsh>start server --name=server1 --mcast-port=10338 \
+--cache-xml-file=../ServerConfigs/cache.xml --server-port=40404
+
+gfsh>start server --name=server2 --mcast-port=10338 \
+--cache-xml-file=../ServerConfigs/cache.xml --server-port=40405
+```
+
+Here is a portion of a `gemfire.properties` file that sets the location of a `cache.xml` file for the server and sets the mcast-port:
+
+``` pre
+mcast-port=10338 
+cache-xml-file=D:\gfeserver\cacheCS.xml
+```
+
+To start the server using this `gemfire.properties` file, enter:
+
+``` pre
+gfsh>start server --name=server1 \
+--properties-file=D:\gfeserver\gemfire.properties
+```
+
+To start a server with an embedded JMX Manager, you can enter the following 
command:
+
+``` pre
+gfsh>start server --name=server2 \
+--J=-Dgemfire.jmx-manager=true --J=-Dgemfire.jmx-manager-start=true
+```
+
+To start a server and provide JVM configuration settings, you can issue a 
command like the following:
+
+``` pre
+gfsh>start server --name=server3 \
+--J=-Xms80m,-Xmx80m 
--J=-XX:+UseConcMarkSweepGC,-XX:CMSInitiatingOccupancyFraction=65
+```
+
+## Start the Server Programmatically
+
+Use the `org.apache.geode.distributed.ServerLauncher` API to start the cache server process inside your code. Use the `ServerLauncher.Builder` class to construct an instance of the `ServerLauncher`, and then use the `start()` method to start the server service. The other methods in the `ServerLauncher` class provide status information about the server and allow you to stop the server.
+
+``` pre
+import org.apache.geode.distributed.ServerLauncher;
+
+public class MyEmbeddedServer {
+
+    public static void main(String[] args) {
+        ServerLauncher serverLauncher = new ServerLauncher.Builder()
+            .setMemberName("server1")
+            .setServerPort(40405)
+            .set("jmx-manager", "true")
+            .set("jmx-manager-start", "true")
+            .build();
+
+        serverLauncher.start();
+
+        System.out.println("Cache server successfully started");
+    }
+}
+```
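+
+Continuing the example above, the same `ServerLauncher` instance can report status and stop the server. This is a sketch only; the `ServerLauncher.ServerState` type and the exact behavior of `status()` and `stop()` are assumptions based on the `ServerLauncher` JavaDocs rather than this page:
+
+``` pre
+// Sketch: check the state of the running server (see the ServerLauncher JavaDocs)
+ServerLauncher.ServerState state = serverLauncher.status();
+System.out.println(state);
+
+// Stop the server service when finished
+serverLauncher.stop();
+```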
+
+## <a id="running_the_cacheserver__section_F58F229D5C7048E9915E0EC470F9A923" 
class="no-quick-link"></a>Check Server Status
+
+If you are connected to the distributed system in `gfsh`, you can check the 
status of a running cache server by providing the server name. For example:
+
+``` pre
+gfsh>status server --name=server1
+```
+
+If you are not connected to a distributed system, you can check the status of 
a local cache server by providing the process ID or the server's current 
working directory. For example:
+
+``` pre
+gfsh>status server --pid=2484
+```
+
+or
+
+``` pre
+% gfsh status server --dir=<server_working_directory>
+```
+
+where &lt;*server\_working\_directory*&gt; corresponds to the local working 
directory where the cache server is running.
+
+If successful, the command returns the following information (with the JVM 
arguments that were provided at startup):
+
+``` pre
+% gfsh status server --dir=server4
+Server in /home/user/server4 on ubuntu.local[40404] as server4 is currently 
online.
+Process ID: 3324
+Uptime: 1 minute 5 seconds
+GemFire Version: 8.0.0
+Java Version: 1.7.0_65
+Log File: /home/user/server4/server4.log
+JVM Arguments: 
+...
+```
+
+## <a id="running_the_cacheserver__section_0E4DDED6AB784B0CAFBAD538B227F487" 
class="no-quick-link"></a>Stop Server
+
+If you are connected to the distributed system in `gfsh`, you can stop a 
running cache server by providing the server name. For example:
+
+``` pre
+gfsh>stop server --name=server1
+```
+
+If you are not connected to a distributed system, you can stop a local cache server by specifying the server's current working directory or the process ID. For example:
+
+``` pre
+gfsh>stop server --pid=2484
+```
+
+or
+
+``` pre
+gfsh>stop server --dir=<server_working_directory>
+```
+
+where &lt;*server\_working\_directory*&gt; corresponds to the local working 
directory where the cache server is running.
+
+You can also use the `gfsh` `shutdown` command to shut down all cache servers 
in an orderly fashion. This is useful if you are using persistent regions. See 
[Starting Up and Shutting Down Your System](starting_up_shutting_down.html) for 
more details.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/running/running_the_locator.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/configuring/running/running_the_locator.html.md.erb 
b/geode-docs/configuring/running/running_the_locator.html.md.erb
new file mode 100644
index 0000000..64ac02e
--- /dev/null
+++ b/geode-docs/configuring/running/running_the_locator.html.md.erb
@@ -0,0 +1,240 @@
+---
+title:  Running Geode Locator Processes
+---
+
+The locator is a Geode process that tells new, connecting members where 
running members are located and provides load balancing for server use.
+
+<a id="running_the_locator__section_E9C98E8756524552BEA9B0CA49A2069E"></a>
+You can run locators as peer locators, server locators, or both:
+
+-   Peer locators give joining members connection information to members 
already running in the locator's distributed system.
+-   Server locators give clients connection information to servers running in 
the locator's distributed system. Server locators also monitor server load and 
send clients to the least-loaded servers.
+
+By default, locators run as peer and server locators.
+
+You can run the locator standalone or embedded within another Geode process. 
Running your locators standalone provides the highest reliability and 
availability of the locator service as a whole.
+
+## <a id="running_the_locator__section_0733348268AF4D5F8851B999A6A36C53" 
class="no-quick-link"></a>Locator Configuration and Log Files
+
+Locator configuration and log files have the following properties:
+
+-   When you start a standalone locator using `gfsh`, `gfsh` will 
automatically load the required JAR files 
(`$GEMFIRE/lib/locator-dependencies.jar`) into the CLASSPATH of the JVM 
process. If you start a standalone locator using the `LocatorLauncher` API, you 
must specify `$GEMFIRE/lib/locator-dependencies.jar` inside the command used to 
launch the locator process. For more information on CLASSPATH settings in 
Geode, see [CLASSPATH Settings for Geode 
Processes](../../getting_started/setup_classpath.html). You can modify the 
CLASSPATH by specifying the `--classpath` parameter.
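+
+    For example, to add application classes to the locator's CLASSPATH (the path shown is illustrative):
+
+    ``` pre
+    gfsh>start locator --name=locator1 --classpath=/path/to/application/classes.jar
+    ```
+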
+-   Locators are members of the distributed system just like any other member. 
In terms of `mcast-port` and `locators` configuration, a locator should be 
configured in the same manner as a server. Therefore, if there are two other 
locators in the distributed system, each locator should reference the other 
locators (just like a server member would). For example:
+
+    ``` pre
+    gfsh> start locator --name=locator1 --port=9009 --mcast-port=0 \
+    --locators='host1[9001],host2[9003]'
+    ```
+
+-   You can configure locators within the `gemfire.properties` file or by 
specifying start-up parameters on the command line. If you are specifying the 
locator's configuration in a properties file, locators require the same 
`gemfire.properties` settings as other members of the distributed system and 
the same `gfsecurity.properties` settings if you are using a separate, 
restricted access security settings file.
+
+    For example, to configure both locators and a multicast port in `gemfire.properties`:
+
+    ``` pre
+    locators=host1[9001],host2[9003]
+    mcast-port=0
+    ```
+
+-   There is no cache configuration specific to locators.
+-   The locator creates a log file in its current working directory. Log file output defaults to `locator_name.log` in the locator's working directory. If you restart a locator with a previously used locator name, the existing *locator\_name*.log file is automatically renamed (for example, to `locator1-01-01.log` or `locator1-02-01.log`). You can modify the level of logging detail in this file by specifying a level in the `--log-level` argument when starting up the locator.
+-   By default, a locator will start in a subdirectory (named after the 
locator) under the directory where `gfsh` is executed. This subdirectory is 
considered the current working directory. You can also specify a different 
working directory when starting the locator in `gfsh`.
+-   By default, a locator that has been shut down and disconnected due to a network partition event or member unresponsiveness restarts itself and automatically tries to reconnect to the existing distributed system. While a locator is in the reconnecting state, it provides no discovery services for the distributed system. See [Handling Forced Cache Disconnection Using Autoreconnect](../../managing/autoreconnect/member-reconnect.html) for more details.
+
+## <a id="running_the_locator__section_wst_ykb_rr" 
class="no-quick-link"></a>Locators and the Cluster Configuration Service
+
+Locators use the cluster configuration service to save configurations that apply to all cluster members, or to members of a specified group. The configurations are saved in the locator's directory and are propagated to all locators in a distributed system. When you start servers using `gfsh`, the servers receive the group-level and cluster-level configurations from the locators.
+
+See [Overview of the Cluster Configuration 
Service](../cluster_config/gfsh_persist.html).
+
+## <a id="running_the_locator__section_FF25228E30624E04ACA8784A2183D585" 
class="no-quick-link"></a>Start the Locator
+
+Use the following guidelines to start the locator:
+
+-   **Standalone locator**. Start a standalone locator in one of these ways:
+    -   Use the `gfsh` command-line utility. See [`gfsh` (Geode 
SHell)](../../tools_modules/gfsh/chapter_overview.html) for more information on 
using `gfsh`. For example:
+
+        ``` pre
+        gfsh>start locator --name=locator1
+
+        gfsh> start locator --name=locator2 --bind-address=192.0.2.0 
--port=13489
+        ```
+
+    -   Start the locator using the `main` method in the 
`org.apache.geode.distributed.LocatorLauncher` class and the Java executable. 
For example:
+
+        ``` pre
+        working/directory/of/Locator/process$ java -server \
+         -classpath "$GEMFIRE/lib/locator-dependencies.jar:/path/to/application/classes.jar" \
+         org.apache.geode.distributed.LocatorLauncher start Locator1 --port=11235 \
+          --redirect-output
+        ```
+
+        Specifically, you use the `LocatorLauncher` class API to run an 
embedded Locator service in Java application processes that you have created. 
The directory where you execute the java command becomes the working directory 
for the locator process.
+
+    -   When starting multiple locators, do not start them simultaneously. As a best practice, wait approximately 30 seconds for the first locator to complete startup before starting any other locator. To confirm that a locator started successfully, check its log file. To view the uptime of a running locator, use the `gfsh status locator` command.
+
+-   **Embedded (colocated) locator**. Manage a colocated locator at member 
startup or through the APIs:
+    -   Use the `gemfire.properties` `start-locator` setting to start the 
locator automatically inside your Geode member. See the 
[Reference](../../reference/book_intro.html#reference). The locator stops 
automatically when the member exits. The property has the following syntax:
+
+        ``` pre
+        #gemfire.properties
+        start-locator=[address]port[,server={true|false},peer={true|false}]
+        ```
+
+        Example:
+
+        ``` pre
+        #gemfire.properties
+        start-locator=13489
+        ```
+
+    -   Use the `org.apache.geode.distributed.LocatorLauncher` API to start the locator inside your code. Use the `LocatorLauncher.Builder` class to construct an instance of the `LocatorLauncher`, and then use the `start()` method to start a locator service embedded in your Java application process. The other methods in the `LocatorLauncher` class provide status information about the locator and allow you to stop the locator.
+
+        ``` pre
+        import org.apache.geode.distributed.LocatorLauncher;
+
+        public class MyEmbeddedLocator {
+
+            public static void main(String[] args) {
+                LocatorLauncher locatorLauncher = new LocatorLauncher.Builder()
+                    .setMemberName("locator1")
+                    .setPort(13489)
+                    .build();
+
+                locatorLauncher.start();
+
+                System.out.println("Locator successfully started");
+            }
+        }
+        ```
+
+        Here's another example that embeds the locator within an application, 
starts it and then checks the status of the locator before allowing other 
members to access it:
+
+        ``` pre
+        package example;
+
+        import ...
+
+        class MyApplication implements Runnable {
+
+          private final LocatorLauncher locatorLauncher;
+
+          public MyApplication(final String... args) {
+            validateArgs(args);
+
+            locatorLauncher = new LocatorLauncher.Builder()
+              .setMemberName(args[0])
+              .setPort(Integer.parseInt(args[1]))
+              .setRedirectOutput(true)
+              .build();
+          }
+
+          protected void validateArgs(final String[] args) {
+            ...
+          }
+
+          public void run() {
+            ...
+
+            // start the Locator in-process
+            locatorLauncher.start();
+
+            // wait for the Locator to start and be ready to accept member (client) connections
+            locatorLauncher.waitOnStatusResponse(30, 5, TimeUnit.SECONDS);
+
+            ...
+          }
+
+          public static void main(final String... args) {
+            new MyApplication(args).run();
+          }
+
+        }
+        ```
+
+        Then to execute the application, you would run:
+
+        ``` pre
+        /working/directory/of/MyApplication$ java \
+         -server -classpath 
"$GEMFIRE/lib/locator-dependencies.jar:/path/to/application/classes.jar" \
+         example.MyApplication Locator1 11235
+        ```
+
+        The directory where you execute the java command becomes the working 
directory for the locator process.
+
+## <a id="running_the_locator__section_F58F229D5C7048E9915E0EC470F9A923" 
class="no-quick-link"></a>Check Locator Status
+
+If you are connected to the distributed system in `gfsh`, you can check the 
status of a running Locator by providing the Locator name. For example:
+
+``` pre
+gfsh>status locator --name=locator1
+```
+
+If you are not connected to a distributed system, you can check the status of 
a local Locator by providing the process ID, the Locator's hostname and port, 
or the Locator's current working directory. For example:
+
+``` pre
+gfsh>status locator --pid=2986
+```
+
+or
+
+``` pre
+gfsh>status locator --host=host1 --port=1035
+```
+
+or
+
+``` pre
+$ gfsh status locator --dir=<locator_working_directory>
+```
+
+where &lt;*locator\_working\_directory*&gt; corresponds to the local working 
directory where the locator is running.
+
+If successful, the command returns the following information (with the JVM 
arguments that were provided at startup):
+
+``` pre
+$ gfsh status locator --dir=locator1
+Locator in /home/user/locator1 on ubuntu.local[10334] as locator1 is currently 
online.
+Process ID: 2359
+Uptime: 17 minutes 3 seconds
+GemFire Version: 8.0.0
+Java Version: 1.7.0_65
+Log File: /home/user/locator1/locator1.log
+JVM Arguments: -Dgemfire.enable-cluster-configuration=true 
-Dgemfire.load-cluster-configuration-from-dir=false
+ -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true 
-Dsun.rmi.dgc.server.gcInterval=9223372036854775806
+Class-Path: 
/home/user/Pivotal_GemFire_800_b48319_Linux/lib/locator-dependencies.jar:/usr/local/java/lib/tools.jar
+
+Cluster configuration service is up and running.
+```
+
+## <a id="running_the_locator__section_0E4DDED6AB784B0CAFBAD538B227F487" 
class="no-quick-link"></a>Stop the Locator
+
+If you are connected to the distributed system in `gfsh`, you can stop a 
running locator by providing the locator name. For example:
+
+``` pre
+gfsh>stop locator --name=locator1
+```
+
+If you are not connected to a distributed system, you can stop a local locator 
by specifying the locator's process ID or the locator's current working 
directory. For example:
+
+``` pre
+gfsh>stop locator --pid=2986
+```
+
+or
+
+``` pre
+gfsh>stop locator --dir=<locator_working_directory>
+```
+
+where &lt;*locator\_working\_directory*&gt; corresponds to the local working 
directory where the locator is running.
+
+## Locators and Multi-Site (WAN) Deployments
+
+If you use a multi-site (WAN) configuration, you can connect a locator to a 
remote site when starting the locator.
+
+To connect a new locator process to a remote locator in a WAN configuration, 
specify the following at startup:
+
+``` pre
+gfsh> start locator --name=locator1 --port=9009 --mcast-port=0 \
+--J='-Dgemfire.remote-locators=192.0.2.0[9009],198.51.100.0[9009]'
+```

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/configuring/running/starting_up_shutting_down.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/configuring/running/starting_up_shutting_down.html.md.erb 
b/geode-docs/configuring/running/starting_up_shutting_down.html.md.erb
new file mode 100644
index 0000000..2afea6a
--- /dev/null
+++ b/geode-docs/configuring/running/starting_up_shutting_down.html.md.erb
@@ -0,0 +1,129 @@
+---
+title:  Starting Up and Shutting Down Your System
+---
+
+Determine the proper startup and shutdown procedures, and write your startup 
and shutdown scripts.
+
+Well-designed procedures for starting and stopping your system can speed 
startup and protect your data. The processes you need to start and stop include 
server and locator processes and your other Geode applications, including 
clients. The procedures you use depend in part on your system’s configuration 
and the dependencies between your system processes.
+
+Use the following guidelines to create startup and shutdown procedures and 
scripts. Some of these instructions use [`gfsh` (Geode 
SHell)](../../tools_modules/gfsh/chapter_overview.html).
+
+## <a id="starting_up_shutting_down__section_3D111558326D4A38BE48C17D44BB66DB" 
class="no-quick-link"></a>Starting Up Your System
+
+You should follow certain order guidelines when starting your Geode system.
+
+Start server-distributed systems before you start their client applications. 
In each distributed system, follow these guidelines for member startup:
+
+-   Start locators first. See [Running Geode Locator 
Processes](running_the_locator.html) for examples of locator start up commands.
+-   Start cache servers before the rest of your processes unless the 
implementation requires that other processes be started ahead of them. See 
[Running Geode Server Processes](running_the_cacheserver.html) for examples of 
server start up commands.
+-   If your distributed system uses both persistent replicated and non-persistent replicated regions, start all the persistent replicated members in parallel before starting the members that host only non-persistent regions. This way, persistent members do not delay their startup waiting for other persistent members with more recent data.
+-   For a system that includes persistent regions, see [Start Up and Shut Down 
with Disk 
Stores](../../managing/disk_storage/starting_system_with_disk_stores.html).
+-   If you are running producer processes and consumer or event listener 
processes, start the consumers first. This ensures the consumers and listeners 
do not miss any notifications or updates.
+-   If you are starting up your locators and peer members all at once, you can use the `locator-wait-time` property (in seconds) at process startup. This timeout allows peers to wait for the locators to finish starting up before attempting to join the distributed system. If a process has been configured to wait for a locator to start, it logs an info-level message:
+
+    > `GemFire startup was unable to contact a locator. Waiting for one to start. Configured locators are frodo[12345],pippin[12345].`
+
+    The process then sleeps for one second and retries until it either connects or the number of seconds specified in `locator-wait-time` has elapsed. By default, `locator-wait-time` is set to zero, meaning that a process that cannot connect to a locator upon startup throws an exception.
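+
+    For example, a sketch of starting a server that waits up to two minutes for its locators (the member names and ports shown are illustrative):
+
+    ``` pre
+    gfsh>start server --name=server1 --locators=frodo[12345],pippin[12345] --locator-wait-time=120
+    ```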
+
+**Note:**
+You can optionally override the default timeout period for shutting down 
individual processes. This override setting must be specified during member 
startup. See [Shutting Down the 
System](#starting_up_shutting_down__section_mnx_4cp_cv) for details.
+
+## <a id="starting_up_shutting_down__section_2F8ABBFCE641463C8A8721841407993D" 
class="no-quick-link"></a>Starting Up After Losing Data on Disk
+
+This information pertains to catastrophic loss of Geode disk store files. If you lose disk store files, your next startup may hang, waiting for the lost disk stores to come back online. If your system hangs at startup, use the `gfsh` command `show missing-disk-stores` to list missing disk stores and, if needed, revoke missing disk stores so your system startup can complete. You must use the Disk Store ID to revoke a disk store. These are the two commands:
+
+``` pre
+gfsh>show missing-disk-stores
+
+           Disk Store ID             |   Host    |               Directory
+------------------------------------ | --------- | -------------------------------------
+60399215-532b-406f-b81f-9b5bd8d1b55a | excalibur | /usr/local/gemfire/deploy/disk_store1
+
+gfsh>revoke missing-disk-store --id=60399215-532b-406f-b81f-9b5bd8d1b55a
+```
+
+**Note:**
+These `gfsh` commands require that you are connected to the distributed system via a JMX Manager node.
+
+## <a id="starting_up_shutting_down__section_mnx_4cp_cv" 
class="no-quick-link"></a>Shutting Down the System
+
+Shut down your Geode system by using either the `gfsh` `shutdown` command or 
by shutting down individual members one at a time.
+
+## <a id="starting_up_shutting_down__section_0EB4DDABB6A348BA83B786EEE7C84CF1" 
class="no-quick-link"></a>Using the shutdown Command
+
+If you are using persistent regions (that is, members are persisting data to disk), use the `gfsh` `shutdown` command to stop the running system in an orderly fashion. This command synchronizes persistent partitioned regions before shutting down, which makes the next startup of the distributed system as efficient as possible.
+
+If possible, all members should be running before you shut them down so 
synchronization can occur. Shut down the system using the following `gfsh` 
command:
+
+``` pre
+gfsh>shutdown
+```
+
+By default, the shutdown command will only shut down data nodes. If you want 
to shut down all nodes including locators, specify the 
`--include-locators=true` parameter. For example:
+
+``` pre
+gfsh>shutdown --include-locators=true
+```
+
+This will shut down all locators one by one, shutting down the manager last.
+
+To shut down all data members after a grace period, specify a time-out option (in seconds):
+
+``` pre
+gfsh>shutdown --time-out=60
+```
+
+To shut down all members, including locators, after a grace period, specify a time-out option (in seconds):
+
+``` pre
+gfsh>shutdown --include-locators=true --time-out=60
+```
+
+## <a id="starting_up_shutting_down__section_A07D40BC118544D0984860A3B4A5CB29" 
class="no-quick-link"></a>Shutting Down System Members Individually
+
+If you are not using persistent regions, you can shut down the distributed 
system by shutting down each member in the reverse order of their startup. (See 
[Starting Up Your 
System](#starting_up_shutting_down__section_3D111558326D4A38BE48C17D44BB66DB) 
for the recommended order of member startup.)
+
+Shut down the distributed system members according to the type of member. For 
example, use the following mechanisms to shut down members:
+
+-   Use the appropriate mechanism to shut down any Geode-connected client 
applications that are running in the distributed system.
+-   Shut down any cache servers. To shut down a server, issue the following 
`gfsh` command:
+
+    ``` pre
+    gfsh>stop server --name=<...>
+    ```
+
+    or
+
+    ``` pre
+    gfsh>stop server --dir=<server_working_dir>
+    ```
+
+-   Shut down any locators. To shut down a locator, issue the following `gfsh` 
command:
+
+    ``` pre
+    gfsh>stop locator --name=<...>
+    ```
+
+    or
+
+    ``` pre
+    gfsh>stop locator --dir=<locator_working_dir>
+    ```
+
+## <a id="starting_up_shutting_down__section_7CF680CF8A924C57A7052AE2F975DA81" 
class="no-quick-link"></a>Option for System Member Shutdown Behavior
+
+The `DISCONNECT_WAIT` command line argument sets the maximum time for each 
individual step in the shutdown process. If any step takes longer than the 
specified amount, it is forced to end. Each operation is given this grace 
period, so the total length of time the cache member takes to shut down depends 
on the number of operations and the `DISCONNECT_WAIT` setting. During the 
shutdown process, Geode produces messages such as:
+
+``` pre
+Disconnect listener still running
+```
+
+The `DISCONNECT_WAIT` default is 10000 milliseconds.
+
+To change it, set this system property on the Java command line used for 
member startup. For example:
+
+``` pre
+gfsh>start server --J=-DDistributionManager.DISCONNECT_WAIT=<milliseconds>
+```
+
+Each process can have different `DISCONNECT_WAIT` settings.
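+
+For example, two servers could be started with different grace periods (the member names below are hypothetical):
+
+``` pre
+gfsh>start server --name=server1 --J=-DDistributionManager.DISCONNECT_WAIT=5000
+gfsh>start server --name=server2 --J=-DDistributionManager.DISCONNECT_WAIT=20000
+```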

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/book_intro.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/book_intro.html.md.erb 
b/geode-docs/developing/book_intro.html.md.erb
new file mode 100644
index 0000000..432d6da
--- /dev/null
+++ b/geode-docs/developing/book_intro.html.md.erb
@@ -0,0 +1,57 @@
+---
+title:  Developing with Apache Geode
+---
+
+*Developing with Apache Geode* explains the main concepts of application programming with Apache Geode. It describes how to plan and implement regions, data serialization, event handling, delta propagation, transactions, and more.
+
+For information about Geode REST application development, see [Developing REST 
Applications for Apache Geode](../rest_apps/book_intro.html).
+
+-   **[Region Data Storage and 
Distribution](../developing/region_options/chapter_overview.html)**
+
+    The Apache Geode data storage and distribution models put your data in the 
right place at the right time. You should understand all the options for data 
storage in Geode before you start configuring your data regions.
+
+-   **[Partitioned 
Regions](../developing/partitioned_regions/chapter_overview.html)**
+
+    In addition to basic region management, partitioned regions include 
options for high availability, data location control, and data balancing across 
the distributed system.
+
+-   **[Distributed and Replicated 
Regions](../developing/distributed_regions/chapter_overview.html)**
+
+    In addition to basic region management, distributed and replicated regions 
include options for things like push and pull distribution models, global 
locking, and region entry versions to ensure consistency across Geode members.
+
+-   **[Consistency for Region 
Updates](../developing/distributed_regions/region_entry_versions.html)**
+
+    Geode ensures that all copies of a region eventually reach a consistent 
state on all members and clients that host the region, including Geode members 
that distribute region events.
+
+-   **[General Region Data 
Management](../developing/management_all_region_types/chapter_overview.html)**
+
+    For all regions, you have options to control memory use, back up your data 
to disk, and keep stale data out of your cache.
+
+-   **[Data 
Serialization](../developing/data_serialization/chapter_overview.html)**
+
+    Data that you manage in Geode must be serialized and deserialized for 
storage and transmittal between processes. You can choose among several options 
for data serialization.
+
+-   **[Events and Event Handling](../developing/events/chapter_overview.html)**
+
+    Geode provides versatile and reliable event distribution and handling for 
your cached data and system member events.
+
+-   **[Delta 
Propagation](../developing/delta_propagation/chapter_overview.html)**
+
+    Delta propagation allows you to reduce the amount of data you send over 
the network by including only changes to objects rather than the entire object.
+
+-   **[Querying](../developing/querying_basics/chapter_overview.html)**
+
+    Geode provides a SQL-like querying language called OQL that allows you to 
access data stored in Geode regions.
+
+-   **[Continuous 
Querying](../developing/continuous_querying/chapter_overview.html)**
+
+    Continuous querying continuously returns events that match the queries you 
set up.
+
+-   **[Transactions](../developing/transactions/chapter_overview.html)**
+
+    Geode provides a transactions API, with `begin`, `commit`, and `rollback` methods. These methods are much the same as the familiar relational database transaction methods.
+
+-   **[Function Execution](../developing/function_exec/chapter_overview.html)**
+
+    A function is a body of code that resides on a server and that an 
application can invoke from a client or from another server without the need to 
send the function code itself. The caller can direct a data-dependent function 
to operate on a particular dataset, or can direct a data-independent function 
to operate on a particular server, member, or member group.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/continuous_querying/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/continuous_querying/chapter_overview.html.md.erb 
b/geode-docs/developing/continuous_querying/chapter_overview.html.md.erb
new file mode 100644
index 0000000..c16d4b5
--- /dev/null
+++ b/geode-docs/developing/continuous_querying/chapter_overview.html.md.erb
@@ -0,0 +1,21 @@
+---
+title:  Continuous Querying
+---
+
+Continuous querying continuously returns events that match the queries you set 
up.
+
+<a id="continuous__section_779B4E4D06E948618E5792335174E70D"></a>
+
+-   **[How Continuous Querying 
Works](../../developing/continuous_querying/how_continuous_querying_works.html)**
+
+    Clients subscribe to server-side events by using SQL-type query filtering. 
The server sends all events that modify the query results. CQ event delivery 
uses the client/server subscription framework.
+
+-   **[Implementing Continuous 
Querying](../../developing/continuous_querying/implementing_continuous_querying.html)**
+
+    Use continuous querying in your clients to receive continuous updates to 
queries run on the servers.
+
+-   **[Managing Continuous 
Querying](../../developing/continuous_querying/continuous_querying_whats_next.html)**
+
+    This topic discusses CQ management options, CQ states, and retrieving 
initial result sets.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/continuous_querying/continuous_querying_whats_next.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/continuous_querying/continuous_querying_whats_next.html.md.erb
 
b/geode-docs/developing/continuous_querying/continuous_querying_whats_next.html.md.erb
new file mode 100644
index 0000000..db36016
--- /dev/null
+++ 
b/geode-docs/developing/continuous_querying/continuous_querying_whats_next.html.md.erb
@@ -0,0 +1,71 @@
+---
+title:  Managing Continuous Querying
+---
+
+This topic discusses CQ management options, CQ states, and retrieving initial 
result sets.
+
+## Using CQs from a RegionService Instance
+
+If you are running durable client CQs from the `RegionService` instance, stop and start the offline event storage for the client as a whole. The server manages one queue for the entire client process, so you need to request the stop and start of durable CQ event messaging for the cache as a whole, through the `ClientCache` instance. If you closed the `RegionService` instances, event processing would stop, but the server would continue to send events, and those events would be lost.
+
+Stop with:
+
+``` pre
+clientCache.close(true);
+```
+
+Start up again in this order:
+
+1.  Create `ClientCache` instance.
+2.  Create all `RegionService` instances. Initialize CQ listeners.
+3.  Call `ClientCache` instance `readyForEvents` method.
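+
+A minimal sketch of this startup sequence (the durable client ID shown is hypothetical and would normally come from your client configuration):
+
+``` pre
+// 1. Create the ClientCache instance for a durable client
+ClientCache clientCache = new ClientCacheFactory()
+    .set("durable-client-id", "client1")
+    .create();
+
+// 2. Create RegionService instances and initialize CQ listeners here . . .
+
+// 3. Tell the servers this client is ready to receive its stored events
+clientCache.readyForEvents();
+```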
+
+## <a 
id="continuous_querying_whats_next__section_35F929682CD24478AF0B2249C5065A27" 
class="no-quick-link"></a>States of a CQ
+
+A CQ has three possible states, which are maintained on the server. You can 
check them from the client through `CqQuery.getState`.
+
+| Query State | What does this mean? | When does the CQ reach this state? | Notes |
+|-------------|----------------------|------------------------------------|-------|
+| STOPPED | The CQ is in place and ready to run, but is not running. | When the CQ is first created and after being stopped from a running state. | A stopped CQ uses system resources. Stopping a CQ only stops the CQ event messaging from server to client. All server-side CQ processing continues, but new CQ events are not placed into the server's client queue. Stopping a CQ does not change anything on the client side (but, of course, the client stops receiving events for the CQ that is stopped). |
+| RUNNING | The CQ is running against server region events and the client listeners are waiting for CQ events. | When the CQ is executed from a stopped state. | This is the only state in which events are sent to the client. |
+| CLOSED | The CQ is not available for any further activities. You cannot rerun a closed CQ. | When the CQ is closed by the client and when cache or connection conditions make it impossible to maintain or run. | The closed CQ does not use system resources. |
+
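+For example, a client might branch on the state of one of its CQs (a sketch; the CQ name is hypothetical):
+
+``` pre
+CqQuery cq = queryService.getCq("priceTracker");
+CqState state = cq.getState();
+if (state.isRunning()) {
+  // events for this CQ are being delivered to this client's listeners . . .
+} else if (state.isStopped()) {
+  // call cq.execute() to resume event delivery . . .
+}
+```
+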
+## <a 
id="continuous_querying_whats_next__section_4E308A70BCE44031BB1F37B95B4D06E6" 
class="no-quick-link"></a>CQ Management Options
+
+You manage your CQs from the client side. All calls are executed only for the 
calling client's CQs.
+
+| Task | For a single CQ use ... | For groups of CQs use ... |
+|------|-------------------------|---------------------------|
+| Create a CQ | `QueryService.newCq` | N/A |
+| Execute a CQ | `CqQuery.execute` and `CqQuery.executeWithInitialResults` | `QueryService.executeCqs` |
+| Stop a CQ | `CqQuery.stop` | `QueryService.stopCqs` |
+| Close a CQ | `CqQuery.close` | `QueryService.closeCqs` |
+| Access a CQ | `CqEvent.getCq` and `QueryService.getCq` | `QueryService.getCqs` |
+| Modify CQ Listeners | `CqQuery.getCqAttributesMutator` | N/A |
+| Access CQ Runtime Statistics | `CqQuery.getStatistics` | `QueryService.getCqStatistics` |
+| Get all durable CQs registered on the server | N/A | `QueryService.getAllDurableCqsFromServer` |
+
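+For example, a client could stop and then close all of its CQs as a group (a sketch; `queryService` is the client's `QueryService` instance):
+
+``` pre
+// Stop event delivery for all of this client's CQs
+queryService.stopCqs();
+// Release all of this client's CQs on the servers
+queryService.closeCqs();
+```
+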
+## <a 
id="continuous_querying_whats_next__section_B274DA982AE6441288323A1D11B58786" 
class="no-quick-link"></a>Managing CQs and Durable Clients Using gfsh
+
+Using the `gfsh` command-line utility, you can perform the following actions:
+
+-   Close durable clients and durable client CQs. See 
[close](../../tools_modules/gfsh/command-pages/close.html#topic_27555B1929D7487D9158096BC065D372).
+-   List all durable CQs for a given durable client ID. See 
[list](../../tools_modules/gfsh/command-pages/list.html).
+-   Show the subscription event queue size for a given durable client ID. See 
[show 
subscription-queue-size](../../tools_modules/gfsh/command-pages/show.html#topic_395C96B500AD430CBF3D3C8886A4CD2E).
+
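+For example, assuming a durable client registered with ID `client1` and a durable CQ named `priceTracker` (both names are hypothetical):
+
+``` pre
+gfsh>list durable-cqs --durable-client-id=client1
+gfsh>show subscription-queue-size --durable-client-id=client1
+gfsh>close durable-cq --durable-client-id=client1 --durable-cq-name=priceTracker
+```
+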
+## <a 
id="continuous_querying_whats_next__section_345E9C144EB544FBA61FC9C83BF1C1ED" 
class="no-quick-link"></a>Retrieving an Initial Result Set of a CQ
+
+You can optionally retrieve an initial result set when you execute your CQ. To 
do this, execute the CQ with the `executeWithInitialResults` method. The 
initial `SelectResults` returned is the same as you would get if you ran the 
query ad hoc, by calling `QueryService.newQuery.execute` on the server cache, 
but with the key included. This example retrieves keys and values from an 
initial result set:
+
+``` pre
+SelectResults cqResults = cq.executeWithInitialResults();
+for (Object o : cqResults.asList()) {
+  Struct s = (Struct) o; // Struct with Key, value pair
+  Portfolio p = (Portfolio) s.get("value"); // get value from the Struct
+  String id = (String) s.get("key"); // get key from the Struct
+}
+```
+
+If you are managing a data set from the CQ results, you can initialize the set 
by iterating over the result set and then updating it from your listeners as 
events arrive. For example, you might populate a new screen with initial 
results and then update the screen from a CQ listener.
+
+If a CQ is executed using the `executeWithInitialResults` method, the returned result may already include the changes with respect to the event. This can 
arise when updates are happening on the region while CQ registration is in 
progress. The CQ does not block any region operation as it could affect the 
performance of the region operation. Design your application to synchronize 
between the region operation and CQ registration to avoid duplicate events from 
being delivered.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/continuous_querying/how_continuous_querying_works.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/continuous_querying/how_continuous_querying_works.html.md.erb
 
b/geode-docs/developing/continuous_querying/how_continuous_querying_works.html.md.erb
new file mode 100644
index 0000000..67facc9
--- /dev/null
+++ 
b/geode-docs/developing/continuous_querying/how_continuous_querying_works.html.md.erb
@@ -0,0 +1,81 @@
+---
+title:  How Continuous Querying Works
+---
+
+Clients subscribe to server-side events by using SQL-type query filtering. The 
server sends all events that modify the query results. CQ event delivery uses 
the client/server subscription framework.
+
+<a 
id="how_continuous_querying_works__section_D473C4D532E14044820B7D76DEE83450"></a>
+With CQ, the client sends a query to the server side for execution and 
receives the events that satisfy the criteria. For example, in a region storing 
stock market trade orders, you can retrieve all orders over a certain price by 
running a CQ with a query like this:
+
+``` pre
+SELECT * FROM /tradeOrder t WHERE t.price > 100.00
+```
+
+When the CQ is running, the server sends the client all new events that affect 
the results of the query. On the client side, listeners programmed by you 
receive and process incoming events. For this example query on `/tradeOrder`, 
you might program a listener to push events to a GUI where higher-priced orders 
are displayed. CQ event delivery uses the client/server subscription framework.
+
+## <a 
id="how_continuous_querying_works__section_777DEEA9D1DD45F59EC1BB35789C3A5D" 
class="no-quick-link"></a>Logical Architecture of Continuous Querying
+
+Your clients can execute any number of CQs, with each CQ assigned any number 
of listeners.
+
+<img src="../../images/ContinuousQuerying-1.gif" 
id="how_continuous_querying_works__image_B7C36491E8CA4376AEAE4E030C3DF86B" 
class="image" />
+
+## <a 
id="how_continuous_querying_works__section_F0E19919B3F645EF83EACBD7AFDF527E" 
class="no-quick-link"></a>Data Flow with CQs
+
+CQs do not update the client region. This is in contrast to other 
server-to-client messaging like the updates sent to satisfy interest 
registration and responses to get requests from the client's `Pool`. CQs serve 
as notification tools for the CQ listeners, which can be programmed in any way 
your application requires.
+
+When a CQ is running against a server region, each entry event is evaluated 
against the CQ query by the thread that updates the server cache. If either the 
old or the new entry value satisfies the query, the thread puts a `CqEvent` in 
the client's queue. The `CqEvent` contains information from the original cache 
event plus information specific to the CQ's execution. Once received by the 
client, the `CqEvent` is passed to the `onEvent` method of all `CqListener`s 
defined for the CQ.
+
+Here is the typical CQ data flow for entries updated in the server cache:
+
+1.  Entry events come to the server's cache from the server or its peers, 
distribution from remote sites, or updates from a client.
+2.  For each event, the server's CQ executor framework checks for a match with 
its running CQs.
+3.  If the old or new entry value satisfies a CQ query, a CQ event is sent to 
the CQ's listeners on the client side. Each listener for the CQ gets the event.
+
+In the following figure:
+
+-   Both the new and old prices for entry X satisfy the CQ query, so that 
event is sent indicating an update to the query results.
+-   The old price for entry Y satisfied the query, so it was part of the query results. The invalidation of entry Y means that it no longer satisfies the query, so an event is sent indicating that it is destroyed in the query results.
+-   The price for the newly created entry Z does not satisfy the query, so no 
event is sent.
+
+<img src="../../images/ContinuousQuerying-3.gif" 
id="how_continuous_querying_works__image_2F21A3820906449FAABE7ACC9654A564" 
class="image" />
+
+## <a 
id="how_continuous_querying_works__section_819CDBA814024315A6DDA83BD56D125C" 
class="no-quick-link"></a>CQ Events
+
+CQ events do not change your client cache. They are provided as an event 
service only. This allows you to have any collection of CQs without storing 
large amounts of data in your regions. If you need to persist information from 
CQ events, program your listener to store the information where it makes the 
most sense for your application.
+
+The `CqEvent` object contains this information:
+
+-   Entry key and new value.
+-   Base operation that triggered the cache event in the server. This is the 
standard `Operation` class instance used for cache events in GemFire.
+-   `CqQuery` object associated with this CQ event.
+-   `Throwable` object, returned only if an error occurred when the `CqQuery` 
ran for the cache event. This is non-null only for `CqListener` onError calls.
+-   Query operation associated with this CQ event. This operation describes the change that the cache event makes to the query results. Possible values are:
+    -   `CREATE`, which corresponds to the standard database INSERT operation
+    -   `UPDATE`
+    -   `DESTROY`, which corresponds to the standard database DELETE operation
+
+Region operations do not translate to specific query operations and query 
operations do not specifically describe region events. Instead, the query 
operation describes how the region event affects the query results.
+
+| Query operations based on old and new entry values | New value does not satisfy the query | New value satisfies the query |
+|----------------------------------------------------|--------------------------------------|-------------------------------|
+| Old value does not satisfy the query | no event | `CREATE` query operation |
+| Old value satisfies the query | `DESTROY` query operation | `UPDATE` query operation |
+
+You can use the query operation to decide what to do with the `CqEvent` in 
your listeners. For example, a `CqListener` that displays query results on 
screen might stop displaying the entry, start displaying the entry, or update 
the entry display depending on the query operation.
+
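+For example, a listener for the display scenario above might branch on the query operation like this (a sketch of an `onEvent` body):
+
+``` pre
+Operation op = cqEvent.getQueryOperation();
+if (op.isCreate()) {
+  // entry newly satisfies the query: start displaying it . . .
+} else if (op.isUpdate()) {
+  // entry still satisfies the query: refresh its display . . .
+} else if (op.isDestroy()) {
+  // entry no longer satisfies the query: stop displaying it . . .
+}
+```
+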
+## <a id="how_continuous_querying_works__section_bfs_llr_gr" 
class="no-quick-link"></a>Region Type Restrictions for CQs
+
+You can only create CQs on replicated or partitioned regions. If you attempt 
to create a CQ on a non-replicated or non-partitioned region, you will receive 
the following error message:
+
+``` pre
+The region <region name> specified in CQ creation is neither replicated nor 
partitioned; only replicated or partitioned regions are allowed in CQ creation.
+```
+
+In addition, you cannot create a CQ on a replicated region with an eviction setting of local-destroy, because this eviction setting changes the region's data policy. If you attempt to create a CQ on such a region, you will receive 
the following error message:
+
+``` pre
+CQ is not supported for replicated region: <region name> with eviction action: 
LOCAL_DESTROY
+```
+
+See also [Configure Distributed, Replicated, and Preloaded 
Regions](../distributed_regions/managing_distributed_regions.html) for 
potential issues with setting local-destroy eviction on replicated regions.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/continuous_querying/implementing_continuous_querying.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/continuous_querying/implementing_continuous_querying.html.md.erb
 
b/geode-docs/developing/continuous_querying/implementing_continuous_querying.html.md.erb
new file mode 100644
index 0000000..de4c81e
--- /dev/null
+++ 
b/geode-docs/developing/continuous_querying/implementing_continuous_querying.html.md.erb
@@ -0,0 +1,185 @@
+---
+title:  Implementing Continuous Querying
+---
+
+Use continuous querying in your clients to receive continuous updates to 
queries run on the servers.
+
+CQs are only run by a client on its servers.
+
+Before you begin, you should be familiar with 
[Querying](../querying_basics/chapter_overview.html) and have your 
client/server system configured.
+
+1. Configure the client pools you will use for CQs with `subscription-enabled` 
set to true.
+
+    To have CQ and interest subscription events arrive as closely together as 
possible, use a single pool for everything. Different pools might use different 
servers, which can lead to greater differences in event delivery time.
+
+2. Write your OQL query to retrieve the data you need from the server.
+
+    The query must satisfy these CQ requirements in addition to the standard 
GemFire querying specifications:
+    -   The FROM clause must contain only a single region specification, with an optional iterator variable.
+    -   The query must be a SELECT expression only, preceded by zero or more 
IMPORT statements. This means the query cannot be a statement such as 
<code>"/tradeOrder.name"</code> or <code>"(SELECT \* from 
/tradeOrder).size".</code>
+    -   The CQ query cannot use:
+        -   Cross region joins
+        -   Drill-downs into nested collections
+        -   DISTINCT
+        -   Projections
+        -   Bind parameters
+    -   The CQ query must be created on a partitioned or replicated region. 
See [Region Type Restrictions for 
CQs](how_continuous_querying_works.html#how_continuous_querying_works__section_bfs_llr_gr).
+
+    The basic syntax for the CQ query is:
+
+    ``` pre
+    SELECT * FROM /fullRegionPath [iterator] [WHERE clause]
+    ```
+
+    This example query could be used to get all trade orders where the price 
is over $100:
+
+    ``` pre
+    SELECT * FROM /tradeOrder t WHERE t.price > 100.00
+    ```
+
+3. Write your CQ listeners to handle CQ events from the server.
+    Implement `org.apache.geode.cache.query.CqListener` in each event handler 
you need. In addition to your main CQ listeners, you might have listeners that 
you use for all CQs to track statistics or other general information.
+
+    **Note:**
+    Be especially careful if you choose to update your cache from your 
`CqListener`. If your listener updates the region that is queried in its own CQ 
and that region has a `Pool` named, the update will be forwarded to the server. 
If the update on the server satisfies the same CQ, it may be returned to the 
same listener that did the update, which could put your application into an 
infinite loop. This same scenario could be played out with multiple regions and 
multiple CQs, if the listeners are programmed to update each other's regions.
+
+    This example outlines a `CqListener` that might be used to update a 
display screen with current data from the server. The listener gets the 
`queryOperation` and entry key and value from the `CqEvent` and then updates 
the screen according to the type of `queryOperation`.
+
+    ``` pre
+    // CqListener class
+    public class TradeEventListener implements CqListener
+    {
+      public void onEvent(CqEvent cqEvent)
+      {
+        // org.apache.geode.cache Operation associated with the query op
+        Operation queryOperation = cqEvent.getQueryOperation();
+        // key and new value from the event
+        Object key = cqEvent.getKey();
+        TradeOrder tradeOrder = (TradeOrder)cqEvent.getNewValue();
+        if (queryOperation.isUpdate())
+        {
+          // update data on the screen for the trade order . . .
+        }
+        else if (queryOperation.isCreate())
+        {
+          // add the trade order to the screen . . .
+        }
+        else if (queryOperation.isDestroy())
+        {
+          // remove the trade order from the screen . . .
+        }
+      }
+      public void onError(CqEvent cqEvent)
+      {
+        // handle the error
+      }
+      // From CacheCallback
+      public void close()
+      {
+        // close the output screen for the trades . . .
+      }
+    }
+    ```
+
+    When you install the listener and run the query, your listener will handle 
all of the CQ results.
+
+4. If you need your CQs to detect whether they are connected to any of the servers that host their subscription queues, implement a `CqStatusListener` instead of a `CqListener`.
+    `CqStatusListener` extends the current `CqListener`, allowing a client to 
detect when a CQ is connected and/or disconnected from the server(s). The 
`onCqConnected()` method will be invoked when the CQ is connected, and when the 
CQ has been reconnected after being disconnected. The `onCqDisconnected()` 
method will be invoked when the CQ is no longer connected to any servers.
+
+    Taking the example from step 3, we can instead implement a 
`CqStatusListener`:
+
+    ``` pre
+    public class TradeEventListener implements CqStatusListener
+    {
+      public void onEvent(CqEvent cqEvent)
+      {
+        // org.apache.geode.cache Operation associated with the query op
+        Operation queryOperation = cqEvent.getQueryOperation();
+        // key and new value from the event
+        Object key = cqEvent.getKey();
+        TradeOrder tradeOrder = (TradeOrder)cqEvent.getNewValue();
+        if (queryOperation.isUpdate())
+        {
+          // update data on the screen for the trade order . . .
+        }
+        else if (queryOperation.isCreate())
+        {
+          // add the trade order to the screen . . .
+        }
+        else if (queryOperation.isDestroy())
+        {
+          // remove the trade order from the screen . . .
+        }
+      }
+      public void onError(CqEvent cqEvent)
+      {
+        // handle the error
+      }
+      // From CacheCallback
+      public void close()
+      {
+        // close the output screen for the trades . . .
+      }
+
+      public void onCqConnected() {
+        //Display connected symbol
+      }
+
+      public void onCqDisconnected() {
+        //Display disconnected symbol
+      }
+    }
+    ```
+
+    When you install the `CqStatusListener`, your listener will be able to 
detect its connection status to the servers that it is querying.
+
+5. Program your client to run the CQ:
+    1. Create a `CqAttributesFactory` and use it to set your `CqListener`s and 
`CqStatusListener`.
+    2. Pass the attributes factory and the CQ query and its unique name to the 
`QueryService` to create a new `CqQuery`.
+    3. Start the query running by calling one of the execute methods on the 
`CqQuery` object.
+        You can execute with or without an initial result set.
+    4. When you are done with the CQ, close it.
+
+## Continuous Query Implementation
+
+``` pre
+// Get cache and queryService - refs to local cache and QueryService
+// Create client /tradeOrder region configured to talk to the server
+
+// Create CqAttributes using CqAttributesFactory
+CqAttributesFactory cqf = new CqAttributesFactory();
+
+// Create a listener and add it to the CQ attributes callback defined below
+CqListener tradeEventListener = new TradeEventListener();
+cqf.addCqListener(tradeEventListener);
+CqAttributes cqa = cqf.create();
+// Name of the CQ and its query
+String cqName = "priceTracker";
+String queryStr = "SELECT * FROM /tradeOrder t where t.price > 100.00";
+
+// Create the CqQuery
+CqQuery priceTracker = queryService.newCq(cqName, queryStr, cqa);
+
+try {
+  // Execute the CQ, getting the optional initial result set.
+  // Without the initial result set, the call is priceTracker.execute();
+  SelectResults sResults = priceTracker.executeWithInitialResults();
+  for (Object o : sResults) {
+    Struct s = (Struct) o;
+    TradeOrder to = (TradeOrder) s.get("value");
+    System.out.println("Initial result includes: " + to);
+  }
+} catch (Exception ex) {
+  ex.printStackTrace();
+}
+// Now the CQ is running on the server, sending CqEvents to the listener
+. . .
+
+// End of life for the CQ - clear up resources by closing
+priceTracker.close();
+```
+
+With continuous queries, you can optionally implement:
+
+-   Highly available CQs by configuring your servers for high availability.
+-   Durable CQs by configuring your clients for durable messaging and 
indicating which CQs are durable at creation.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/data_serialization/PDX_Serialization_Features.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/data_serialization/PDX_Serialization_Features.html.md.erb
 
b/geode-docs/developing/data_serialization/PDX_Serialization_Features.html.md.erb
new file mode 100644
index 0000000..45b154e
--- /dev/null
+++ 
b/geode-docs/developing/data_serialization/PDX_Serialization_Features.html.md.erb
@@ -0,0 +1,23 @@
+---
+title:  Geode PDX Serialization Features
+---
+
+Geode PDX serialization offers several advantages in terms of functionality.
+
+## <a 
id="concept_F02E40517C4B42F2A75B133BB507C626__section_A0EEB4DA3E9F4EA4B65FE727D3951EA1"
 class="no-quick-link"></a>Application Versioning of PDX Domain Objects
+
+Domain objects evolve along with your application code. You might create an 
address object with two address lines, then realize later that a third line is 
required for some situations. Or you might realize that a particular field is 
not used and want to get rid of it. With PDX, you can use old and new versions 
of domain objects together in a distributed system if the versions differ by 
the addition or removal of fields. This compatibility lets you gradually 
introduce modified code and data into the system, without bringing the system 
down.
+
+Geode maintains a central registry of the PDX domain object metadata. Using 
the registry, Geode preserves fields in each member's cache regardless of 
whether the field is defined. When a member receives an object with a 
registered field that the member is not aware of, the member does not access 
the field, but preserves it and passes it along with the entire object to other 
members. When a member receives an object that is missing one or more fields 
according to the member's version, Geode assigns the Java default values for 
the field types to the missing fields.
+
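The Java default values mentioned here are simply the zero values for each field type. As a stdlib-only illustration (the `AddressV2` class below is hypothetical, not part of Geode), a freshly constructed instance shows the values Geode would assign to fields missing from an older serialized version:

``` pre
public class Main {
    // Hypothetical newer version of a domain class; suppose thirdLine
    // and unitNumber did not exist in the older version of the class.
    static class AddressV2 {
        String line1;
        String thirdLine;   // reference types default to null
        int unitNumber;     // numeric types default to 0
        boolean verified;   // booleans default to false
    }

    public static void main(String[] args) {
        AddressV2 a = new AddressV2();
        System.out.println(a.thirdLine);   // null
        System.out.println(a.unitNumber);  // 0
        System.out.println(a.verified);    // false
    }
}
```

A member whose version of the class lacks a field never sees the field at all; a member whose version adds a field sees these defaults when the sender's version did not supply it.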
+## <a 
id="concept_F02E40517C4B42F2A75B133BB507C626__section_D68A6A9C2C0C4D32AE7DADA2A4C3104D"
 class="no-quick-link"></a>Portability of PDX Serializable Objects
+
+When you serialize an object using PDX, Geode stores the object's type 
information in the central registry. The information is passed among clients 
and servers, peers, and distributed systems.
+
+This centralization of object type information is advantageous for 
client/server installations in which clients and servers are written in 
different languages. Clients pass registry information to servers automatically 
when they store a PDX serialized object. Clients can run queries and functions 
against the data in the servers without the servers needing to be compatible 
with the stored objects' domain classes. One client can store data on the 
server to be retrieved by another client, with no requirements on the part of 
the server.
+
+## <a 
id="concept_F02E40517C4B42F2A75B133BB507C626__section_08C901A3CF3E438C8778F09D482B9A63"
 class="no-quick-link"></a>Reduced Deserialization of Serialized Objects
+
+The access methods of PDX serialized objects allow you to examine specific 
fields of your domain object without deserializing the entire object. Depending 
on your object usage, you can reduce serialization and deserialization costs 
significantly.
+
+Java and other clients can run queries and execute functions against the 
objects in the server caches without deserializing the entire object on the 
server side. The query engine automatically recognizes PDX objects, retrieves 
the `PdxInstance` of the object and uses only the fields it needs. Likewise, 
peers can access only the necessary fields from the serialized object, keeping 
the object stored in the cache in serialized form.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/data_serialization/auto_serialization.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/data_serialization/auto_serialization.html.md.erb 
b/geode-docs/developing/data_serialization/auto_serialization.html.md.erb
new file mode 100644
index 0000000..7e3dfa2
--- /dev/null
+++ b/geode-docs/developing/data_serialization/auto_serialization.html.md.erb
@@ -0,0 +1,124 @@
+---
+title:  Using Automatic Reflection-Based PDX Serialization
+---
+
+You can configure your cache to automatically serialize and deserialize domain 
objects without having to add any extra code to them.
+
+<a id="auto_serialization__section_E2B7719D3C1545808CC21E0FDBD2D610"></a>
+You can automatically serialize and deserialize domain objects without coding 
a `PdxSerializer` class. You do this by registering your domain objects with a 
custom `PdxSerializer` called `ReflectionBasedAutoSerializer` that uses Java 
reflection to infer which fields to serialize.
+
+You can also extend the ReflectionBasedAutoSerializer to customize its 
behavior. For example, you could add optimized serialization support for 
BigInteger and BigDecimal types. See [Extending the 
ReflectionBasedAutoSerializer](extending_the_autoserializer.html#concept_9E020566EE794A81A48A90BA798EC279)
 for details.
+
+**Note:**
+Your custom PDX autoserializable classes cannot use the `com.gemstone` 
package. If they do, the classes will be ignored by the PDX auto serializer.
+
+<a id="auto_serialization__section_C69046B44729454F8CD464B0289EFDD8"></a>
+
+**Prerequisites**
+
+-   Understand generally how to configure the Geode cache.
+-   Understand how PDX serialization works and how to configure your 
application to use `PdxSerializer`.
+
+<a 
id="auto_serialization__section_43F6E45FF69E470897FD9D002FBE896D"><strong>Procedure</strong></a>
+
+In your application where you manage data from the cache, provide the 
following configuration and code as appropriate:
+
+1.  In the domain classes that you wish to autoserialize, make sure each class 
has a zero-arg constructor. For example:
+
+    ``` pre
+    public PortfolioPdx(){}
+    ```
+
+2.  Using one of the following methods, set the PDX serializer to 
`ReflectionBasedAutoSerializer`.
+    1.  In gfsh, execute the following command prior to starting up any 
members that host data:
+
+        ``` pre
+        gfsh>configure pdx --auto-serializable-classes=com\.company\.domain\..*
+        ```
+
+        By using gfsh, this configuration can be propagated across the cluster 
through the [Cluster Configuration 
Service](../../configuring/cluster_config/gfsh_persist.html).
+
+    2.  Alternately, in `cache.xml`:
+
+        ``` pre
+        <!-- Cache configuration configuring auto serialization behavior -->
+        <cache>
+          <pdx>
+            <pdx-serializer>
+              <class-name>
+                org.apache.geode.pdx.ReflectionBasedAutoSerializer
+              </class-name>
+              <parameter name="classes">
+                <string>com.company.domain.DomainObject</string>
+              </parameter>
+            </pdx-serializer>
+          </pdx>
+          ...
+        </cache>
+        ```
+
+        The parameter, `classes`, takes a comma-separated list of class 
patterns to define the domain classes to serialize. If your domain object is an 
aggregation of other domain classes, you need to register the domain object and 
each of those domain classes explicitly for the domain object to be serialized 
completely.
+
+    3.  Using the Java API:
+
+        ``` pre
+        Cache c = new CacheFactory()
+          .setPdxSerializer(new 
ReflectionBasedAutoSerializer("com.company.domain.DomainObject"))
+          .create();
+        ```
+
+3.  Customize the behavior of the `ReflectionBasedAutoSerializer` using one of 
the following mechanisms:
+    -   By using a class pattern string to specify the classes to 
auto-serialize and customize how the classes are serialized. Class pattern 
strings can be specified in the API by passing strings to the 
`ReflectionBasedAutoSerializer` constructor or by specifying them in cache.xml. 
See [Customizing Serialization with Class Pattern 
Strings](autoserialization_with_class_pattern_strings.html#concept_9B67BBE94B414B7EA63BD7E8D61D0312)
 for details.
+    -   By creating a subclass of `ReflectionBasedAutoSerializer` and 
overriding specific methods. See [Extending the 
ReflectionBasedAutoSerializer](extending_the_autoserializer.html#concept_9E020566EE794A81A48A90BA798EC279)
 for details.
+
+4.  If desired, configure the `ReflectionBasedAutoSerializer` to check the 
portability of the objects it is passed before it tries to autoserialize them. 
When this flag is set to true, the `ReflectionBasedAutoSerializer` throws a 
`NonPortableClassException` when it attempts to autoserialize a non-portable 
object. To set this, use the following configuration:
+    -   In gfsh, use the following command:
+
+        ``` pre
+        gfsh>configure pdx 
--portable-auto-serializable-classes=com\.company\.domain\..*
+        ```
+
+        By using gfsh, this configuration can be propagated across the cluster 
through the [Cluster Configuration 
Service](../../configuring/cluster_config/gfsh_persist.html).
+    -   In cache.xml:
+
+        ``` pre
+        <!-- Cache configuration configuring auto serialization behavior -->
+        <cache>
+          <pdx>
+            <pdx-serializer>
+              <class-name>
+                org.apache.geode.pdx.ReflectionBasedAutoSerializer
+              </class-name>
+              <parameter name="classes">
+                <string>com.company.domain.DomainObject</string>
+              </parameter>
+              <parameter name="check-portability">
+                <string>true</string>
+              </parameter>
+            </pdx-serializer>
+          </pdx>
+          ...
+        </cache>
+        ```
+    -   Using the Java API:
+
+        ``` pre
+        Cache c = new CacheFactory()
+          .setPdxSerializer(new 
ReflectionBasedAutoSerializer(true,"com.company.domain.DomainObject"))
+          .create();
+        ```
+
+For each domain class you provide, all fields are considered for serialization 
except those defined as `static` or `transient` and those you explicitly 
exclude using the class pattern strings.
+
+**Note:**
+The `ReflectionBasedAutoSerializer` traverses the given domain object's class 
hierarchy to retrieve all fields to be considered for serialization. So if 
`DomainObjectB` inherits from `DomainObjectA`, you only need to register 
`DomainObjectB` to have all of `DomainObjectB` serialized.
+
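As a rough sketch of this selection rule (plain Java reflection, not Geode's actual implementation), the candidate fields are the non-static, non-transient fields collected up the class hierarchy:

``` pre
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

public class Main {
    static class DomainObjectA {
        protected int id;
        static int counter;         // static: never serialized
    }
    static class DomainObjectB extends DomainObjectA {
        String name;
        transient Object scratch;   // transient: never serialized
    }

    // Walk the hierarchy and keep fields that are neither static
    // nor transient -- the fields the autoserializer considers.
    static List<String> candidateFields(Class<?> c) {
        List<String> names = new ArrayList<>();
        for (; c != null && c != Object.class; c = c.getSuperclass()) {
            for (Field f : c.getDeclaredFields()) {
                int m = f.getModifiers();
                if (!Modifier.isStatic(m) && !Modifier.isTransient(m)) {
                    names.add(f.getName());
                }
            }
        }
        return names;
    }

    public static void main(String[] args) {
        // Registering only DomainObjectB still picks up the inherited id.
        System.out.println(candidateFields(DomainObjectB.class));
    }
}
```

Registering only `DomainObjectB` yields both `name` and the inherited `id`, while `counter` and `scratch` are skipped.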
+-   **[Customizing Serialization with Class Pattern 
Strings](../../developing/data_serialization/autoserialization_with_class_pattern_strings.html)**
+
+    Use class pattern strings to name the classes that you want to serialize 
using Geode's reflection-based autoserializer, to specify object identity 
fields, and to specify fields to exclude from serialization.
+
+-   **[Extending the 
ReflectionBasedAutoSerializer](../../developing/data_serialization/extending_the_autoserializer.html)**
+
+    You can extend the `ReflectionBasedAutoSerializer` to handle serialization 
in a customized manner. This section provides an overview of the available 
method-based customization options and an example of extending the serializer 
to support BigDecimal and BigInteger types.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/data_serialization/autoserialization_with_class_pattern_strings.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/data_serialization/autoserialization_with_class_pattern_strings.html.md.erb
 
b/geode-docs/developing/data_serialization/autoserialization_with_class_pattern_strings.html.md.erb
new file mode 100644
index 0000000..ba18558
--- /dev/null
+++ 
b/geode-docs/developing/data_serialization/autoserialization_with_class_pattern_strings.html.md.erb
@@ -0,0 +1,68 @@
+---
+title:  Customizing Serialization with Class Pattern Strings
+---
+
+Use class pattern strings to name the classes that you want to serialize using 
Geode's reflection-based autoserializer, to specify object identity fields, and 
to specify fields to exclude from serialization.
+
+The class pattern strings used to configure the 
`ReflectionBasedAutoSerializer` are standard regular expressions. For example, 
this expression would select all classes defined in the `com.company.domain` 
package and its subpackages:
+
+``` pre
+com\.company\.domain\..*
+```
+
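Because these are standard Java regular expressions, you can sanity-check a pattern against a candidate class name with the JDK's `java.util.regex` package (note the doubled backslashes in a Java string literal):

``` pre
import java.util.regex.Pattern;

public class Main {
    public static void main(String[] args) {
        // Same pattern as above, written as a Java string literal.
        Pattern p = Pattern.compile("com\\.company\\.domain\\..*");
        System.out.println(p.matcher("com.company.domain.Order").matches());  // true
        System.out.println(p.matcher("com.company.other.Order").matches());   // false
    }
}
```
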
+You can augment the pattern strings with a special notation to define fields 
to exclude from serialization and to define fields to mark as PDX identity 
fields. The full syntax of the pattern string is:
+
+``` pre
+<class pattern> [# (identity|exclude) = <field pattern>]... [, <class pattern>...]
+```
+
+The following example pattern string sets these PDX serialization criteria:
+
+-   Classes with names matching the pattern `com.company.DomainObject.*` are 
serialized. In those classes, fields beginning with `id` are marked as identity 
fields and fields named `creationDate` are not serialized.
+-   The class `com.company.special.Patient` is serialized. In that class, the 
field `ssn` is marked as an identity field.
+
+``` pre
+com.company.DomainObject.*#identity=id.*#exclude=creationDate, 
+com.company.special.Patient#identity=ssn
+```
+
+**Note:**
+There is no association between the `identity` and `exclude` options, so the 
pattern above could also be expressed as:
+
+``` pre
+com.company.DomainObject.*#identity=id.*,
+com.company.DomainObject.*#exclude=creationDate,
+com.company.special.Patient#identity=ssn
+```
+
+**Note:**
+The order of the patterns is not relevant. All defined class patterns are used 
when determining whether a field should be considered as an identity field or 
should be excluded.
+
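Geode parses these strings internally, but the syntax is easy to see with a toy splitter (illustration only, not Geode's parser): commas separate per-class entries, and `#` separates a class pattern from its `identity`/`exclude` clauses.

``` pre
public class Main {
    public static void main(String[] args) {
        String config = "com.company.DomainObject.*#identity=id.*#exclude=creationDate,"
                      + "com.company.special.Patient#identity=ssn";
        // Split into per-class entries, then into the class pattern and
        // its option clauses.
        for (String entry : config.split(",")) {
            String[] parts = entry.trim().split("#");
            System.out.println("class pattern: " + parts[0]);
            for (int i = 1; i < parts.length; i++) {
                String[] kv = parts[i].split("=", 2);
                System.out.println("  " + kv[0] + " -> " + kv[1]);
            }
        }
    }
}
```

For the example pattern above, this prints the class pattern `com.company.DomainObject.*` with its `identity` and `exclude` field patterns, followed by `com.company.special.Patient` with its `identity` field pattern.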
+Examples:
+
+-   This XML uses the example pattern shown above:
+
+    ``` pre
+    <parameter name="classes">
+      <string>com.company.DomainObject.*#identity=id.*#exclude=creationDate, 
+    com.company.special.Patient#identity=ssn</string>
+    </parameter>
+    ```
+
+-   This application code sets the same pattern:
+
+    ``` pre
+    classPatterns.add("com.company.DomainObject.*#identity=id.*#exclude=creationDate,"
+        + "com.company.special.Patient#identity=ssn");
+    ```
+
+-   This application code has the same effect:
+
+    ``` pre
+    Cache c = new CacheFactory().set("cache-xml-file", cacheXmlFileName)
+        .setPdxSerializer(new ReflectionBasedAutoSerializer(
+            "com.company.DomainObject.*#identity=id.*",
+            "com.company.DomainObject.*#exclude=creationDate",
+            "com.company.special.Patient#identity=ssn"))
+        .create();
+    ```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/data_serialization/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/data_serialization/chapter_overview.html.md.erb 
b/geode-docs/developing/data_serialization/chapter_overview.html.md.erb
new file mode 100644
index 0000000..5ef0877
--- /dev/null
+++ b/geode-docs/developing/data_serialization/chapter_overview.html.md.erb
@@ -0,0 +1,23 @@
+---
+title:  Data Serialization
+---
+
+Data that you manage in Geode must be serialized and deserialized for storage 
and transmittal between processes. You can choose among several options for 
data serialization.
+
+-   **[Overview of Data 
Serialization](../../developing/data_serialization/data_serialization_options.html)**
+
+    Geode offers serialization options other than Java serialization that give 
you higher performance and greater flexibility for data storage, transfers, and 
language types.
+
+-   **[Geode PDX 
Serialization](../../developing/data_serialization/gemfire_pdx_serialization.html)**
+
+    Geode's Portable Data eXchange (PDX) is a cross-language data format that 
can reduce the cost of distributing and serializing your objects. PDX stores 
data in named fields that you can access individually, to avoid the cost of 
deserializing the entire data object. PDX also allows you to mix versions of 
objects where you have added or removed fields.
+
+-   **[Geode Data Serialization (DataSerializable and 
DataSerializer)](../../developing/data_serialization/gemfire_data_serialization.html)**
+
+    Geode's `DataSerializable` interface gives you quick serialization of your 
objects.
+
+-   **[Standard Java 
Serialization](../../developing/data_serialization/java_serialization.html)**
+
+    You can use standard Java serialization for data you only distribute 
between Java applications. If you distribute your data between non-Java clients 
and Java servers, you need to do additional programming to get the data between 
the various class formats.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/developing/data_serialization/data_serialization_options.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/data_serialization/data_serialization_options.html.md.erb
 
b/geode-docs/developing/data_serialization/data_serialization_options.html.md.erb
new file mode 100644
index 0000000..7402ee7
--- /dev/null
+++ 
b/geode-docs/developing/data_serialization/data_serialization_options.html.md.erb
@@ -0,0 +1,51 @@
+---
+title:  Overview of Data Serialization
+---
+
+Geode offers serialization options other than Java serialization that give you 
higher performance and greater flexibility for data storage, transfers, and 
language types.
+
+<a 
id="data_serialization_options__section_B1BDB0E7F6814DFD8BACD8D8C5CAA81B"></a>
+All data that Geode moves out of the local cache must be serializable. 
However, you do not necessarily need to implement `java.io.Serializable` since 
other serialization options are available in Geode. Region data that must be 
serializable falls under the following categories:
+
+-   Partitioned regions
+-   Distributed regions
+-   Regions that are persisted or overflowed to disk
+-   Server or client regions in a client/server installation
+-   Regions configured with a gateway sender for distributing events in a 
multi-site installation
+-   Regions that receive events from remote caches
+-   Regions that provide function arguments and results
+
+**Note:**
+If you are storing objects with the [HTTP Session Management 
Modules](../../tools_modules/http_session_mgmt/chapter_overview.html), these 
objects must be serializable since they are serialized before being stored in 
the region.
+
+To minimize the cost of serialization and deserialization, Geode avoids 
changing the data format whenever possible. This means your data might be 
stored in the cache in serialized or deserialized form, depending on how you 
use it. For example, if a server acts only as a storage location for data 
distribution between clients, it makes sense to leave the data in serialized 
form, ready to be transmitted to clients that request it. Partitioned region 
data is always initially stored in serialized form.
+
+## <a 
id="data_serialization_options__section_691C2CF5A4E24D599070A7AADEDF2BEC" 
class="no-quick-link"></a>Data Serialization Options
+
+<a 
id="data_serialization_options__section_44CC2DEEDA0F41D49D416ABA921A6436"></a>
+
+With Geode, you have the option to serialize your domain objects automatically 
or to implement serialization using one of Geode's interfaces. Enabling 
automatic serialization means that domain objects are serialized and 
deserialized without your having to make any code changes to those objects. 
This automatic serialization is performed by registering your domain objects 
with a custom `PdxSerializer` called the `ReflectionBasedAutoSerializer`, which 
uses Java reflection to infer which fields to serialize.
+
+If autoserialization does not meet your needs, you can serialize your objects 
by implementing one of the Geode interfaces, `PdxSerializable` or 
`DataSerializable`. You can use these interfaces to replace any standard Java 
data serialization for better performance. If you cannot or do not want to 
modify your domain classes, each interface has an alternate serializer class, 
`PdxSerializer` and `DataSerializer`. To use these, you create your custom 
serializer class and then associate it with your domain class in the Geode 
cache configuration.
+
+Geode Data serialization is about 25% faster than PDX serialization. However, 
using PDX serialization can help you avoid the even larger cost of full 
deserialization, because PDX supports access to individual fields without 
deserializing the entire object.
+
+<a 
id="data_serialization_options__section_993B4A298874459BB4A8A0A9811854D9"></a><a
 
id="data_serialization_options__table_ccf00c9f-9b98-47f7-ab30-3d23ecaff0a1"></a>
+
+| Capability                                                                                                                      | Geode Data Serializable | Geode PDX Serializable |
+|---------------------------------------------------------------------------------------------------------------------------------|-------------------------|------------------------|
+| Implements Java Serializable.                                                                                                   | X                       |                        |
+| Handles multiple versions of application domain objects, provided the versions differ by the addition or subtraction of fields. |                         | X                      |
+| Provides single field access of serialized data, without full deserialization - supported also for OQL querying.                |                         | X                      |
+| Automatically ported to other languages by Geode.                                                                               |                         | X                      |
+| Works with .NET clients.                                                                                                        | X                       | X                      |
+| Works with C++ clients.                                                                                                         | X                       | X                      |
+| Works with Geode delta propagation.                                                                                             | X                       | X (See note below.)    |
+
+<span class="tablecap">**Table 1.** Serialization Options: Comparison of Features</span>
+
+**Note:** By default, you can use Geode delta propagation with PDX 
serialization. However, delta propagation will not work if you have set the 
Geode property `read-serialized` to "true". To apply a change, delta 
propagation requires a domain class instance and its `fromDelta` method. If you 
have set `read-serialized` to true, you will receive a `PdxInstance` instead of 
a domain class instance, and `PdxInstance` does not have the `fromDelta` method 
required for delta propagation.
+
+## <a 
id="data_serialization_options__section_D90C2C09B95C40B6803CF202CF8008BF" 
class="no-quick-link"></a>Differences between Geode Serialization (PDX or Data 
Serializable) and Java Serialization
+
+Geode serialization (either PDX Serialization or Data Serialization) does not 
support circular object graphs whereas Java serialization does. In Geode 
serialization, if the same object is referenced more than once in an object 
graph, the object is serialized for each reference, and deserialization 
produces multiple copies of the object. By contrast, in this situation Java 
serialization serializes the object only once, and deserialization produces a 
single instance with multiple references to it.
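
The Java-serialization side of this contrast can be demonstrated with the standard library alone: a shared reference survives a round trip as a single instance.

``` pre
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.ArrayList;
import java.util.List;

public class Main {
    public static void main(String[] args) throws Exception {
        // One object referenced twice in the graph.
        Object shared = new ArrayList<String>();
        List<Object> graph = new ArrayList<>();
        graph.add(shared);
        graph.add(shared);

        // Round trip through standard Java serialization.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(graph);
        out.flush();
        @SuppressWarnings("unchecked")
        List<Object> copy = (List<Object>) new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray())).readObject();

        // Java serialization preserves identity: both slots hold the same
        // instance. Geode PDX/Data serialization of the same graph would
        // instead deserialize into two independent copies.
        System.out.println(copy.get(0) == copy.get(1));  // true
    }
}
```
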
