http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/delta_propagation/implementing_delta_propagation.html.md.erb
----------------------------------------------------------------------
diff --git 
a/developing/delta_propagation/implementing_delta_propagation.html.md.erb 
b/developing/delta_propagation/implementing_delta_propagation.html.md.erb
deleted file mode 100644
index 7727532..0000000
--- a/developing/delta_propagation/implementing_delta_propagation.html.md.erb
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title:  Implementing Delta Propagation
----
-
-By default, delta propagation is enabled in your distributed system. When 
enabled, delta propagation is used for objects that implement 
`org.apache.geode.Delta`. You program the methods to store and extract delta 
information for your entries and to apply received delta information.
-
-<a 
id="implementing_delta_propagation__section_877AC61D691C44078A782683F90D169B"></a>
-Use the following procedure to implement delta propagation in your distributed 
system.
-
-1.  Study your object types and expected application behavior to determine 
which regions can benefit from using delta propagation. Delta propagation does 
not improve performance for all data and data modification scenarios. See [When 
to Avoid Delta Propagation](when_to_use_delta_prop.html#when_to_use_delta_prop).
-2.  For each region where you are using delta propagation, choose whether to 
enable cloning using the delta propagation property `cloning-enabled`. Cloning 
is disabled by default. See [Delta Propagation 
Properties](delta_propagation_properties.html#delta_propagation_properties).
-3.  If you do not enable cloning, review all associated listener code for 
dependencies on `EntryEvent.getOldValue`. Without cloning, Geode modifies the 
entry in place and so loses its reference to the old value. For delta events, 
the `EntryEvent` methods `getOldValue` and `getNewValue` both return the new 
value.
-4.  For every class where you want delta propagation, implement 
`org.apache.geode.Delta` and update your methods to support delta propagation. 
Exactly how you do this depends on your application and object needs, but these 
steps describe the basic approach (a sketch follows this procedure):
-    1.  If the class is a plain old Java object (POJO), wrap it for this 
implementation and update your code to work with the wrapper class.
-    2.  Define as transient any extra object fields that you use to manage 
delta state. This can help performance when the full object is distributed. 
Whenever standard Java serialization is used, the `transient` keyword tells 
Java not to serialize the field.
-    3.  Study the object contents to decide how to handle delta changes. Delta 
propagation has the same issues of distributed concurrency control as the 
distribution of full objects, but on a more detailed level. Some parts of your 
objects may be able to change independent of one another while others may 
always need to change together. Send deltas large enough to keep your data 
logically consistent. If, for example, field A and field B depend on each 
other, then your delta distributions should either update both fields or 
neither. As with regular updates, the fewer producers you have on a data 
region, the lower your likelihood of concurrency issues.
-    4.  In the application code that puts entries, put the fully populated 
object into the local cache. Even though you are planning to send only deltas, 
errors on the receiving end could cause Geode to request the full object, so 
you must provide it to the originating put method. Do this even in empty 
producers, with regions configured for no local data storage. This usually 
means doing a get on the entry unless you are sure it does not already exist 
anywhere in the distributed region.
-    5.  Change each field's update method to record information about the 
update. The information must be sufficient for `toDelta` to encode the delta 
and any additional required delta information when it is invoked.
-    6.  Write `hasDelta` to report on whether a delta is available.
-    7.  Write `toDelta` to create a byte stream with the changes to the object 
and any other information `fromDelta` will need to apply the changes. Before 
returning from `toDelta`, reset your delta state to indicate that there are no 
delta changes waiting to be sent.
-    8.  Write `fromDelta` to decode the byte stream that `toDelta` creates and 
update the object.
-    9.  Make sure you provide adequate synchronization to your object to 
maintain a consistent object state. If you do not use cloning, you will 
probably need to synchronize on reads and writes to avoid reading partially 
written updates from the cache. This synchronization might involve `toDelta`, 
`fromDelta`, `toData`, `fromData`, and other methods that access or update the 
object. Additionally, your implementation should take into account the 
possibility of concurrent invocations of `fromDelta` and one or more of the 
object's update methods.
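-
-The following is a minimal sketch of this procedure for a hypothetical 
`Account` wrapper class. The class name, the single `balance` field, and the 
method-level synchronization are illustrative assumptions, not requirements of 
the `org.apache.geode.Delta` interface:
-
-``` pre
-import java.io.DataInput;
-import java.io.DataOutput;
-import java.io.IOException;
-import java.io.Serializable;
-import org.apache.geode.Delta;
-
-public class Account implements Delta, Serializable {
-  private double balance;
-  // Transient delta bookkeeping; skipped when the full object is serialized
-  private transient boolean balanceChanged = false;
-
-  public synchronized void setBalance(double balance) {
-    this.balance = balance;
-    this.balanceChanged = true;   // record the update for toDelta
-  }
-
-  public synchronized boolean hasDelta() {
-    return balanceChanged;
-  }
-
-  public synchronized void toDelta(DataOutput out) throws IOException {
-    out.writeDouble(balance);
-    balanceChanged = false;       // reset delta state before returning
-  }
-
-  public synchronized void fromDelta(DataInput in) throws IOException {
-    // Throwing org.apache.geode.InvalidDeltaException here instead would
-    // make Geode request the full object from the sender
-    this.balance = in.readDouble();
-  }
-}
-```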
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/delta_propagation/when_to_use_delta_prop.html.md.erb
----------------------------------------------------------------------
diff --git a/developing/delta_propagation/when_to_use_delta_prop.html.md.erb 
b/developing/delta_propagation/when_to_use_delta_prop.html.md.erb
deleted file mode 100644
index 47de0ba..0000000
--- a/developing/delta_propagation/when_to_use_delta_prop.html.md.erb
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title:  When to Avoid Delta Propagation
----
-
-Generally, the larger your objects and the smaller the deltas, the greater the 
benefits of using delta propagation. Partitioned regions with higher redundancy 
levels generally benefit more from delta propagation. However, in some 
application scenarios, delta propagation does not show any significant 
benefits. On occasion it results in performance degradation.
-
-<a id="when_to_use_delta_prop__section_83BA84BB08194FC58F2BCE149AA0F0EC"></a>
-By default, delta propagation is enabled in your distributed system.
-
-These are the main factors that can reduce the performance benefits of using 
delta propagation:
-
--   The added costs of deserializing your objects to apply deltas. Applying a 
delta requires the entry value to be deserialized. Once this is done, the 
object is stored back in the cache in deserialized form. This aspect of delta 
propagation only negatively impacts your system if your objects are not already 
being deserialized for other reasons, such as for indexing and querying or for 
listener operations. Once stored in deserialized form, there are 
reserialization costs for operations that send the object outside of the 
member, like distribution from a gateway sender, values sent in response to 
`netSearch` or client requests, and storage to disk. The more operations that 
require reserialization, the higher the overhead of deserializing the object. 
As with all serialization efforts, you can improve serialization and 
deserialization performance by providing custom implementations of 
`DataSerializable` for your objects (a sketch follows this list).
--   Cloning when applying the delta. Using cloning can affect performance and 
generates extra garbage. Not using cloning is risky however, as you are 
modifying cached values in place. Without cloning, make sure you synchronize 
your entry access to keep your cache from becoming inconsistent.
--   Problems applying the delta that cause the system to go back to the 
originator for the full entry value. When this happens, the overall operation 
costs more than sending the full entry value in the first place. This can be 
additionally aggravated if your delta is sent to a number of recipients, all or 
most of them request a full value, and the full value send requires the object 
to be serialized.
--   Disk I/O costs associated with overflow regions. If you use eviction with 
overflow to disk, on-disk values must be brought into memory in order to apply 
the delta. This is much more costly than just removing the reference to the 
disk copy, as you would do with a full value distribution into the cache.
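-
-As a minimal sketch of such a custom implementation, consider a hypothetical 
`Position` class (the class and field names are illustrative assumptions):
-
-``` pre
-import java.io.DataInput;
-import java.io.DataOutput;
-import java.io.IOException;
-import org.apache.geode.DataSerializable;
-
-public class Position implements DataSerializable {
-  private String ticker;
-  private double price;
-
-  // A public zero-argument constructor is required for deserialization
-  public Position() {}
-
-  public void toData(DataOutput out) throws IOException {
-    out.writeUTF(ticker);
-    out.writeDouble(price);
-  }
-
-  public void fromData(DataInput in) throws IOException {
-    this.ticker = in.readUTF();
-    this.price = in.readDouble();
-  }
-}
-```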
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/distributed_regions/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/developing/distributed_regions/chapter_overview.html.md.erb 
b/developing/distributed_regions/chapter_overview.html.md.erb
deleted file mode 100644
index ce33ee2..0000000
--- a/developing/distributed_regions/chapter_overview.html.md.erb
+++ /dev/null
@@ -1,27 +0,0 @@
----
-title:  Distributed and Replicated Regions
----
-
-In addition to basic region management, distributed and replicated regions 
include options for things like push and pull distribution models, global 
locking, and region entry versions to ensure consistency across Geode members.
-
--   **[How Distribution 
Works](../../developing/distributed_regions/how_distribution_works.html)**
-
-    To use distributed and replicated regions, you should understand how they 
work and your options for managing them.
-
--   **[Options for Region 
Distribution](../../developing/distributed_regions/choosing_level_of_dist.html)**
-
-    You can use distribution with and without acknowledgment, or global 
locking for your region distribution. Regions that are configured for 
distribution with acknowledgment can also be configured to resolve concurrent 
updates consistently across all Geode members that host the region.
-
--   **[How Replication and Preloading 
Work](../../developing/distributed_regions/how_replication_works.html)**
-
-    To work with replicated and preloaded regions, you should understand how 
their data is initialized and maintained in the cache.
-
--   **[Configure Distributed, Replicated, and Preloaded 
Regions](../../developing/distributed_regions/managing_distributed_regions.html)**
-
-    Plan the configuration and ongoing management of your distributed, 
replicated, and preloaded regions, and configure the regions.
-
--   **[Locking in Global 
Regions](../../developing/distributed_regions/locking_in_global_regions.html)**
-
-    In global regions, the system locks entries and the region during updates. 
You can also explicitly lock the region and its entries as needed by your 
application. Locking includes system settings that help you optimize 
performance and locking behavior between your members.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/distributed_regions/choosing_level_of_dist.html.md.erb
----------------------------------------------------------------------
diff --git a/developing/distributed_regions/choosing_level_of_dist.html.md.erb 
b/developing/distributed_regions/choosing_level_of_dist.html.md.erb
deleted file mode 100644
index f48aaeb..0000000
--- a/developing/distributed_regions/choosing_level_of_dist.html.md.erb
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title:  Options for Region Distribution
----
-
-You can use distribution with and without acknowledgment, or global locking 
for your region distribution. Regions that are configured for distribution with 
acknowledgment can also be configured to resolve concurrent updates 
consistently across all Geode members that host the region.
-
-<a id="choosing_level_of_dist__section_F2528B151DD54CEFA05C4BA655BCF016"></a>
-Each distributed region must have the same scope and concurrency checking 
setting throughout the distributed system.
-
-Distributed scope is provided at three levels:
-
--   **distributed-no-ack**. Distribution operations return without waiting for 
a response from other caches. This scope provides the best performance and uses 
the least amount of overhead, but it is also most prone to having 
inconsistencies caused by network problems. For example, a temporary disruption 
of the network transport layer could cause a failure in distributing updates to 
a cache on a remote machine, while the local cache continues being updated.
--   **distributed-ack**. Distribution waits for acknowledgment from other 
caches before continuing. This is slower than `distributed-no-ack`, but covers 
simple communication problems such as temporary network disruptions.
-
-    In systems where there are many `distributed-no-ack` operations, it is 
possible for `distributed-ack` operations to take a long time to complete. The 
distributed system has a configurable time to wait for acknowledgment of any 
`distributed-ack` message before sending alerts to the logs about a possible 
problem with the unresponsive member. No matter how long the wait, the sender 
keeps waiting in order to honor the distributed-ack region setting. The 
`gemfire.properties` attribute governing this is `ack-wait-threshold`.
-
--   **global**. Entries and regions are automatically locked across the 
distributed system during distribution operations. All load, create, put, 
invalidate, and destroy operations on the region and its entries are performed 
with a distributed lock. The global scope enforces strict consistency across 
the distributed system, but it is the slowest mechanism for achieving 
consistency. In addition to the implicit locking performed by distribution 
operations, regions with global scope and their contents can be explicitly 
locked through the application APIs. This allows applications to perform 
atomic, multi-step operations on regions and region entries.
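-
-Scope is declared in the `region-attributes` of a `cache.xml` declaration, and 
`ack-wait-threshold` in `gemfire.properties`. This sketch assumes a region 
named `exampleRegion` and an arbitrary 45-second threshold:
-
-``` pre
-<region name="exampleRegion">
-  <region-attributes scope="distributed-no-ack"> 
-  </region-attributes>
-</region>
-```
-
-``` pre
-# gemfire.properties
-ack-wait-threshold=45
-```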
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/distributed_regions/how_distribution_works.html.md.erb
----------------------------------------------------------------------
diff --git a/developing/distributed_regions/how_distribution_works.html.md.erb 
b/developing/distributed_regions/how_distribution_works.html.md.erb
deleted file mode 100644
index bbc7522..0000000
--- a/developing/distributed_regions/how_distribution_works.html.md.erb
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title:  How Distribution Works
----
-
-To use distributed and replicated regions, you should understand how they work 
and your options for managing them.
-
-<a id="how_distribution_works__section_2F892A4987C547E68CA78067133C2C2C"></a>
-**Note:**
-The management of replicated and distributed regions supplements the general 
information for managing data regions provided in [Basic Configuration and 
Programming](../../basic_config/book_intro.html). See also 
`org.apache.geode.cache.PartitionAttributes`.
-
-A distributed region automatically sends entry value updates to remote caches 
and receives updates from them.
-
--   Distributed entry updates come from the `Region` `put` and `create` 
operations (the creation of an entry with a non-null value is seen as an update 
by remote caches that already have the entry key). Entry updates are 
distributed selectively - only to caches where the entry key is already 
defined. This provides a pull model of distribution, compared to the push model 
that you get with replication.
--   Distribution alone does not cause new entries to be copied from remote 
caches.
--   A distributed region shares cache loader and cache writer application 
event handler plug-ins across the distributed system.
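-
-For example, this put (a sketch that assumes an existing `cache` and a 
distributed region named `exampleRegion` already hosted by this member) sends 
the new value only to remote caches that already define `key1`:
-
-``` pre
-Region<String, String> region = cache.getRegion("exampleRegion");
-// Distributed only to caches where "key1" is already defined; the put does
-// not create the entry in remote caches that lack the key
-region.put("key1", "updatedValue");
-```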
-
-In a distributed region, new and updated entry values are automatically 
distributed to remote caches that already have the entries defined.
-
-**Step 1:** The application updates or creates the entry. At this point, the 
entry in the M1 cache may not yet exist.
-
-<img src="../../images_svg/distributed_how_1.svg" 
id="how_distribution_works__image_40EFE6E95E6945A1B08A68508ECBCC60" 
class="image" />
-
-**Step 2:** The new value is automatically distributed to caches holding the 
entry.
-
-<img src="../../images_svg/distributed_how_2.svg" 
id="how_distribution_works__image_AF8A3ADEB5D94E20B101FDA92BF6D002" 
class="image" />
-
-**Step 3:** The entry's value is the same throughout the distributed system.
-
-<img src="../../images_svg/distributed_how_3.svg" 
id="how_distribution_works__image_5B1F06B54C9047E28A8C8673D1D5BD27" 
class="image" />
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/distributed_regions/how_region_versioning_works.html.md.erb
----------------------------------------------------------------------
diff --git 
a/developing/distributed_regions/how_region_versioning_works.html.md.erb 
b/developing/distributed_regions/how_region_versioning_works.html.md.erb
deleted file mode 100644
index 7e4c551..0000000
--- a/developing/distributed_regions/how_region_versioning_works.html.md.erb
+++ /dev/null
@@ -1,110 +0,0 @@
----
-title: Consistency Checking by Region Type
----
-
-<a id="topic_7A4B6C6169BD4B1ABD356294F744D236"></a>
-
-Geode performs different consistency checks depending on the type of region 
you have configured.
-
-## <a 
id="topic_7A4B6C6169BD4B1ABD356294F744D236__section_B090F5FB87D84104A7BE4BCEA6BAE6B7"
 class="no-quick-link"></a>Partitioned Region Consistency
-
-For a partitioned region, Geode maintains consistency by routing all updates 
on a given key to the Geode member that holds the primary copy of that key. 
That member holds a lock on the key while distributing updates to other members 
that host a copy of the key. Because all updates to a partitioned region are 
serialized on the primary Geode member, all members apply the updates in the 
same order and consistency is maintained at all times. See [Understanding 
Partitioning](../partitioned_regions/how_partitioning_works.html).
-
-## <a 
id="topic_7A4B6C6169BD4B1ABD356294F744D236__section_72DFB366C8F14ADBAF2A136669ECAB1E"
 class="no-quick-link"></a>Replicated Region Consistency
-
-For a replicated region, any member that hosts the region can update a key and 
distribute that update to other members without locking the key. It is possible 
that two members can update the same key at the same time (a concurrent 
update). It is also possible that, due to network latency, an update in one 
member is distributed to other members at a later time, after those members 
have already applied more recent updates to the key (an out-of-order update). 
By default, Geode members perform conflict checking before applying region 
updates in order to detect and consistently resolve concurrent and out-of-order 
updates. Conflict checking ensures that region data eventually becomes 
consistent on all members that host the region. The conflict checking behavior 
for replicated regions is summarized as follows:
-
--   If two members update the same key at the same time, conflict checking 
ensures that all members eventually apply the same value, which is the value of 
one of the two concurrent updates.
--   If a member receives an out-of-order update (an update that is received 
after one or more recent updates were applied), conflict checking ensures that 
the out-of-order update is discarded and not applied to the cache.
-
-[How Consistency Checking Works for Replicated 
Regions](#topic_C5B74CCDD909403C815639339AA03758) and [How Destroy and Clear 
Operations Are Resolved](#topic_321B05044B6641FCAEFABBF5066BD399) provide more 
details about how Geode performs conflict checking when applying an update.
-
-## <a 
id="topic_7A4B6C6169BD4B1ABD356294F744D236__section_313045F430EE459CB411CAAE7B00F3D8"
 class="no-quick-link"></a>Non-Replicated Regions and Client Cache Consistency
-
-When a member receives and applies an update for an entry in a non-replicated 
region, it performs conflict checking in the same way as for a 
replicated region. However, if the member initiates an operation on an entry 
that is not present in the region, it first passes that operation to a member 
that hosts a replicate. The member that hosts the replica generates and 
provides the version information necessary for subsequent conflict checking. 
See [How Consistency Checking Works for Replicated 
Regions](#topic_C5B74CCDD909403C815639339AA03758).
-
-Client caches also perform consistency checking in the same way when they 
receive an update for a region entry. However, all region operations that 
originate in the client cache are first passed on to an available Geode server, 
which generates the version information necessary for subsequent conflict 
checking.
-
-## <a id="topic_B64891585E7F4358A633C792F10FA23E" 
class="no-quick-link"></a>Configuring Consistency Checking
-
-Geode enables consistency checking by default. You cannot disable consistency 
checking for persistent regions. For all other regions, you can explicitly 
enable or disable consistency checking by setting the 
`concurrency-checks-enabled` region attribute in `cache.xml` to "true" or 
"false."
-
-All Geode members that host a region must use the same 
`concurrency-checks-enabled` setting for that region.
-
-A client cache can disable consistency checking for a region even if server 
caches enable consistency checking for the same region. This configuration 
ensures that the client sees all events for the region, but it does not prevent 
the client cache region from becoming out-of-sync with the server cache.
-
-See 
[&lt;region-attributes&gt;](../../reference/topics/cache_xml.html#region-attributes).
-
-**Note:**
-Regions that do not enable consistency checking remain subject to race 
conditions. Concurrent updates may result in one or more members having 
different values for the same key. Network latency can result in older updates 
being applied to a key after more recent updates have occurred.
-
-## <a id="topic_0BDACA590B2C4974AC9C450397FE70B2" 
class="no-quick-link"></a>Overhead for Consistency Checks
-
-Consistency checking requires additional overhead for storing and distributing 
version and timestamp information, as well as for maintaining destroyed entries 
for a period of time to meet consistency requirements.
-
-To provide consistency checking, each region entry uses an additional 16 
bytes. When an entry is deleted, a tombstone entry of approximately 13 bytes is 
created and maintained until the tombstone expires or is garbage-collected in 
the member. (When an entry is destroyed, the member temporarily retains the 
entry with its current version stamp to detect possible conflicts with 
operations that have occurred. The retained entry is referred to as a 
tombstone.) See [How Destroy and Clear Operations Are 
Resolved](#topic_321B05044B6641FCAEFABBF5066BD399).
-
-If you cannot support the additional overhead in your deployment, you can 
disable consistency checks by setting `concurrency-checks-enabled` to "false" 
for each region. See [Consistency for Region 
Updates](region_entry_versions.html#topic_CF2798D3E12647F182C2CEC4A46E2045).
-
-## <a id="topic_C5B74CCDD909403C815639339AA03758" 
class="no-quick-link"></a>How Consistency Checking Works for Replicated Regions
-
-Each region stores version and timestamp information for use in conflict 
detection. Geode members use the recorded information to detect and resolve 
conflicts consistently before applying a distributed update.
-
-<a 
id="topic_C5B74CCDD909403C815639339AA03758__section_763B071061C94D1E82E8883325294547"></a>
-By default, each entry in a region stores the ID of the Geode member that last 
updated the entry, as well as a version stamp for the entry that is incremented 
each time an update occurs. The version information is stored in each local 
entry, and the version stamp is distributed to other Geode members when the 
local entry is updated.
-
-A Geode member or client that receives an update message first compares the 
update version stamp with the version stamp recorded in its local cache. If the 
update version stamp is larger, it represents a newer version of the entry, so 
the receiving member applies the update locally and updates the version 
information. A smaller update version stamp indicates an out-of-order update, 
which is discarded.
-
-An identical version stamp indicates that multiple Geode members updated the 
same entry at the same time. To resolve a concurrent update, a Geode member 
always applies (or keeps) the region entry that has the highest membership ID; 
the region entry having the lower membership ID is discarded.
-
-**Note:**
-When a Geode member discards an update message (either for an out-of-order 
update or when resolving a concurrent update), it does not pass the discarded 
event to an event listener for the region. You can track the number of 
discarded updates for each member using the `conflatedEvents` statistic. See 
[Geode Statistics 
List](../../reference/statistics/statistics_list.html#statistics_list). Some 
members may discard an update while other members apply the update, depending 
on the order in which each member receives the update. For this reason, the 
`conflatedEvents` statistic differs for each Geode member. The example below 
describes this behavior in more detail.
-
-The following example shows how a concurrent update is handled in a 
distributed system of three Geode members. Assume that Members A, B, and C have 
membership IDs of 1, 2, and 3, respectively. Each member currently stores an 
entry, X, in their caches at version C2 (the entry was last updated by member 
C):
-
-**Step 1:** An application updates entry X on Geode member A at the same time 
another application updates entry X on member C. Each member increments the 
version stamp for the entry and records the version stamp with their member ID 
in their local caches. In this case the entry was originally at version C2, so 
each member updates the version to 3 (A3 and C3, respectively) in their local 
caches.
-
-<img src="../../images_svg/region_entry_versions_1.svg" 
id="topic_C5B74CCDD909403C815639339AA03758__image_nt5_ptw_4r" class="image" />
-
-**Step 2:** Member A distributes its update message to members B and C.
-
-Member B compares the update version stamp (3) to its recorded version stamp 
(2) and applies the update to its local cache as version A3. In this member, 
the update is applied for the time being, and passed on to configured event 
listeners.
-
-Member C compares the update version stamp (3) to its recorded version stamp 
(3) and identifies a concurrent update. To resolve the conflict, member C next 
compares the membership ID of the update to the membership ID stored in its 
local cache. Because the membership ID of the update (A3) is lower than 
the ID stored in the cache (C3), member C discards the update (and increments 
the `conflatedEvents` statistic).
-
-<img src="../../images_svg/region_entry_versions_2.svg" 
id="topic_C5B74CCDD909403C815639339AA03758__image_ocs_35b_pr" class="image" />
-
-**Step 3:** Member C distributes the update message to members A and B.
-
-Members A and B compare the update version stamp (3) to their recorded version 
stamps (3) and identify the concurrent update. To resolve the conflict, both 
members compare the membership ID of the update with the membership ID stored 
in their local caches. Because the membership ID of A in the cache 
value is less than the ID of C in the update, both members record the update C3 
in their local caches, overwriting the previous value.
-
-At this point, all members that host the region have achieved a consistent 
state for the concurrent updates on members A and C.
-
-<img src="../../images_svg/region_entry_versions_3.svg" 
id="topic_C5B74CCDD909403C815639339AA03758__image_gsv_k5b_pr" class="image" />
-
-## <a id="topic_321B05044B6641FCAEFABBF5066BD399" 
class="no-quick-link"></a>How Destroy and Clear Operations Are Resolved
-
-When consistency checking is enabled for a region, a Geode member does not 
immediately remove an entry from the region when an application destroys the 
entry. Instead, the member retains the entry with its current version stamp for 
a period of time in order to detect possible conflicts with operations that 
have occurred. The retained entry is referred to as a *tombstone*. Geode 
retains tombstones for partitioned regions and non-replicated regions as well 
as for replicated regions, in order to provide consistency.
-
-A tombstone in a client cache or a non-replicated region expires after 8 
minutes, at which point the tombstone is immediately removed from the cache.
-
-A tombstone for a replicated or partitioned region expires after 10 minutes. 
Expired tombstones are eligible for garbage collection by the Geode member. 
Garbage collection is automatically triggered after 100,000 tombstones of any 
type have timed out in the local Geode member. You can optionally set the 
`gemfire.tombstone-gc-threshold` property to a value smaller than 100000 to 
perform garbage collection more frequently.
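-
-The property is read as a Java system property. For example, a sketch of 
lowering the threshold when starting a server through gfsh (the server name 
and the threshold value are illustrative assumptions):
-
-``` pre
-gfsh>start server --name=server1 --J=-Dgemfire.tombstone-gc-threshold=50000
-```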
-
-**Note:**
-To avoid out-of-memory errors, a Geode member also initiates garbage 
collection for tombstones when the amount of free memory drops below 30 percent 
of total memory.
-
-You can monitor the total number of tombstones in a cache using the 
`tombstoneCount` statistic in `CachePerfStats`. The `tombstoneGCCount` 
statistic records the total number of tombstone garbage collection cycles that 
a member has performed. `replicatedTombstonesSize` and 
`nonReplicatedTombstonesSize` show the approximate number of bytes that are 
currently consumed by tombstones in replicated or partitioned regions, and in 
non-replicated regions, respectively. See [Geode Statistics 
List](../../reference/statistics/statistics_list.html#statistics_list).
-
-## <a 
id="topic_321B05044B6641FCAEFABBF5066BD399__section_4D0140E96A3141EB8D983D0A43464097"
 class="no-quick-link"></a>About Region.clear() Operations
-
-Region entry version stamps and tombstones ensure consistency only when 
individual entries are destroyed. A `Region.clear()` operation, however, 
operates on all entries in a region at once. To provide consistency for 
`Region.clear()` operations, Geode obtains a distributed read/write lock for 
the region, which blocks all concurrent updates to the region. Any updates that 
were initiated before the clear operation are allowed to complete before the 
region is cleared.
-
-## <a id="topic_32ACFA5542C74F3583ECD30467F352B0" 
class="no-quick-link"></a>Transactions with Consistent Regions
-
-A transaction that modifies a region having consistency checking enabled 
generates all necessary version information for region updates when the 
transaction commits.
-
-If a transaction modifies a normal, preloaded or empty region, the transaction 
is first delegated to a Geode member that holds a replicate for the region. 
This behavior is similar to the transactional behavior for partitioned regions, 
where the partitioned region transaction is forwarded to a member that hosts 
the primary for the partitioned region update.
-
-The limitation for transactions on normal, preloaded, or empty regions is 
that, when consistency checking is enabled, a transaction cannot perform a 
`localDestroy` or `localInvalidate` operation against the region. Geode throws 
an `UnsupportedOperationInTransactionException` exception in such cases. An 
application should use a `Destroy` or `Invalidate` operation in place of a 
`localDestroy` or `localInvalidate` when consistency checks are enabled.
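-
-A sketch of that substitution inside a transaction (assuming an existing 
`cache` and a region that hosts the entry):
-
-``` pre
-CacheTransactionManager txMgr = cache.getCacheTransactionManager();
-txMgr.begin();
-// region.localDestroy("key1") would throw
-// UnsupportedOperationInTransactionException when consistency checking
-// is enabled; use a distributed destroy instead
-region.destroy("key1");
-txMgr.commit();
-```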
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/distributed_regions/how_region_versioning_works_wan.html.md.erb
----------------------------------------------------------------------
diff --git 
a/developing/distributed_regions/how_region_versioning_works_wan.html.md.erb 
b/developing/distributed_regions/how_region_versioning_works_wan.html.md.erb
deleted file mode 100644
index 0ce2f04..0000000
--- a/developing/distributed_regions/how_region_versioning_works_wan.html.md.erb
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title:  How Consistency Is Achieved in WAN Deployments
----
-
-When two or more Geode systems are configured to distribute events over a WAN, 
each system performs local consistency checking before it distributes an event 
to a configured gateway sender. Discarded events are not distributed across the 
WAN.
-
-Regions can also be configured to distribute updates to other Geode clusters 
over a WAN. With a distributed WAN configuration, multiple gateway senders 
asynchronously queue and send region updates to another Geode cluster. It is 
possible for multiple sites to send updates to the same region entry at the 
same time. It is also possible that, due to a slow WAN connection, a cluster 
might receive region updates after a considerable delay, and after it has 
applied more recent updates to a region. To ensure that WAN-replicated regions 
eventually reach a consistent state, Geode first ensures that each cluster 
performs consistency checking on regions before queuing updates to a gateway 
sender for WAN distribution. In other words, region conflicts are first 
detected and resolved in the local cluster, using the techniques described in 
the previous sections.
-
-When a Geode cluster in a WAN configuration receives a distributed update, 
conflict checking is performed to ensure that all sites apply updates in the 
same way. This ensures that regions eventually reach a consistent state across 
all Geode clusters. The default conflict checking behavior for WAN-replicated 
regions is summarized as follows:
-
--   If an update is received from the same Geode cluster that last updated the 
region entry, then there is no conflict and the update is applied.
--   If an update is received from a different Geode cluster than the one that 
last updated the region entry, then a potential conflict exists. A cluster 
applies the update only when the update has a timestamp that is later than the 
timestamp currently recorded in the cache.
-
-**Note:**
-If you use the default conflict checking feature for WAN deployments, you must 
ensure that all Geode members in all clusters synchronize their system clocks. 
For example, use a common NTP server for all Geode members that participate in 
a WAN deployment.
-
-As an alternative to the default conflict checking behavior for WAN 
deployments, you can develop and deploy a custom conflict resolver for handling 
region events that are distributed over a WAN. Using a custom resolver enables 
you to handle conflicts using criteria other than, or in addition to, timestamp 
information. For example, you might always prioritize updates that originate 
from a particular site, given that the timestamp value is within a certain 
range.
-
-When a gateway sender distributes an event to another Geode site, it adds the 
distributed system ID of the local cluster, as well as a timestamp for the 
event. In a default configuration, the cluster that receives the event examines 
the timestamp to determine whether or not the event should be applied. If the 
timestamp of the update is earlier than the local timestamp, the cluster 
discards the event. If the timestamp is the same as the local timestamp, then 
the entry having the highest distributed system ID is applied (or kept).
-
-You can override the default consistency checking for WAN events by installing 
a conflict resolver plug-in for the region. If a conflict resolver is 
installed, then any event that can potentially cause a conflict (any event that 
originated from a different distributed system ID than the ID that last 
modified the entry) is delivered to the conflict resolver. The resolver plug-in 
then makes the sole determination for which update to apply or keep.
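-
-A minimal sketch of such a resolver, which always keeps updates that originate 
from the cluster with distributed system ID 1 (the class name and the site 
preference are illustrative assumptions):
-
-``` pre
-import org.apache.geode.cache.util.GatewayConflictHelper;
-import org.apache.geode.cache.util.GatewayConflictResolver;
-import org.apache.geode.cache.util.TimestampedEntryEvent;
-
-public class PreferSiteOneResolver implements GatewayConflictResolver {
-  public void onEvent(TimestampedEntryEvent event,
-                      GatewayConflictHelper helper) {
-    // Returning without calling a helper method applies the event as usual;
-    // disallowEvent() keeps the existing cache entry instead
-    if (event.getOldDistributedSystemID() == 1
-        && event.getNewDistributedSystemID() != 1) {
-      helper.disallowEvent();
-    }
-  }
-}
-```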
-
-See "Implementing a GatewayConflictResolver" under [Resolving Conflicting 
Events](../events/resolving_multisite_conflicts.html#topic_E97BB68748F14987916CD1A50E4B4542)
 to configure a custom resolver.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/distributed_regions/how_replication_works.html.md.erb
----------------------------------------------------------------------
diff --git a/developing/distributed_regions/how_replication_works.html.md.erb 
b/developing/distributed_regions/how_replication_works.html.md.erb
deleted file mode 100644
index 73bc5e1..0000000
--- a/developing/distributed_regions/how_replication_works.html.md.erb
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title:  How Replication and Preloading Work
----
-
-To work with replicated and preloaded regions, you should understand how their 
data is initialized and maintained in the cache.
-
-<a id="how_replication_works__section_C75BB463A0584491ABD982A55E5A050F"></a>
-Replicated and preloaded regions are configured by using one of the 
`REPLICATE` region shortcut settings, or by setting the region attribute 
`data-policy` to `replicate`, `persistent-replicate`, or `preloaded`.
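-
-For example, this `cache.xml` sketch declares a preloaded region (the region 
name is an illustrative assumption):
-
-``` pre
-<region name="exampleRegion">
-  <region-attributes data-policy="preloaded"> 
-  </region-attributes>
-</region>
-```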
-
-## <a id="how_replication_works__section_B4E76BBCC6104A27BC0A8ECA6B9CDF91" 
class="no-quick-link"></a>Initialization of Replicated and Preloaded Regions
-
-At region creation, the system initializes the preloaded or replicated region 
with the most complete and up-to-date data set it can find. The system uses 
these data sources to initialize the new region, following this order of 
preference:
-
-1.  Another replicated region that is already defined in the distributed 
system.
-2.  For persistent replicate only. Disk files, followed by a union of all 
copies of the region in the distributed cache.
-3.  For preloaded region only. Another preloaded region that is already 
defined in the distributed system.
-4.  The union of all copies of the region in the distributed cache.
-
-<img src="../../images_svg/distributed_replica_preload.svg" 
id="how_replication_works__image_5F50EBA30CE3408091F07A198F821741" 
class="image" />
-
-While a region is being initialized from a replicated or preloaded region, if 
the source region crashes, the initialization starts over.
-
-If a union of regions is used for initialization, as in the figure, and one of 
the individual source regions goes away during the initialization (due to cache 
closure, member crash, or region destruction), the new region may contain a 
partial data set from the crashed source region. When this happens, there is no 
warning logged or exception thrown. The new region still has a complete set of 
the remaining members' regions.
-
-## <a id="how_replication_works__section_6BE7555A711E4CA490B02E58B5DDE396" 
class="no-quick-link"></a>Behavior of Replicated and Preloaded Regions After 
Initialization
-
-Once initialized, the preloaded region operates like a region with a 
`normal` `data-policy`, receiving distributions only for entries it has defined 
in the local cache.
-
-<img src="../../images_svg/distributed_preload.svg" 
id="how_replication_works__image_994CA599B1004D3F95E1BB7C4FAC2AEF" 
class="image" />
-
-If the region is configured as a replicated region, it receives all new 
creations in the distributed region from the other members. This is the push 
distribution model. Unlike the preloaded region, the replicated region has a 
contract that states it will hold all entries that are present anywhere in the 
distributed region.
-
-<img src="../../images_svg/distributed_replica.svg" 
id="how_replication_works__image_2E7F3EB6213A47FEA3ABE32FD2CB1503" 
class="image" />
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/distributed_regions/locking_in_global_regions.html.md.erb
----------------------------------------------------------------------
diff --git 
a/developing/distributed_regions/locking_in_global_regions.html.md.erb 
b/developing/distributed_regions/locking_in_global_regions.html.md.erb
deleted file mode 100644
index 6a6e030..0000000
--- a/developing/distributed_regions/locking_in_global_regions.html.md.erb
+++ /dev/null
@@ -1,92 +0,0 @@
----
-title:  Locking in Global Regions
----
-
-In global regions, the system locks entries and the region during updates. You 
can also explicitly lock the region and its entries as needed by your 
application. Locking includes system settings that help you optimize 
performance and locking behavior between your members.
-
-<a 
id="locking_in_global_regions__section_065B3A57CCCA4F17821D170A312B6675"></a>
-In regions with global scope, locking helps ensure cache consistency.
-
-Locking of regions and entries is done in two ways:
-
-1.  **Implicit**. Geode automatically locks global regions and their data 
entries during most operations. Region invalidation and destruction do not 
acquire locks.
-2.  **Explicit**. You can use the API to explicitly lock the region and its 
entries. Do this to guarantee atomicity in tasks with multi-step distributed 
operations. The `Region` methods 
`org.apache.geode.cache.Region.getDistributedLock` and 
`org.apache.geode.cache.Region.getRegionDistributedLock` return instances of 
`java.util.concurrent.locks.Lock` for a region and a specified key.
-
-    **Note:**
-    You must use the `Region` API to lock regions and region entries. Do not 
use the `DistributedLockService` in the `org.apache.geode.distributed` package. 
That service is available only for locking in arbitrary distributed 
applications. It is not compatible with the `Region` locking methods.
-
-## <a id="locking_in_global_regions__section_5B47F9C5C27A4B789A3498AC553BB1FB" 
class="no-quick-link"></a>Lock Timeouts
-
-Getting a lock on a region or entry is a two-step process of getting a lock 
instance for the entity and then using the instance to set the lock. Once you 
have the lock, you hold it for your operations, then release it for someone 
else to use. You can set limits on the time spent waiting to get a lock and the 
time spent holding it. Both implicit and explicit locking operations are 
affected by the timeouts:
-
-   The lock timeout limits the wait to get a lock. The cache attribute 
`lock-timeout` governs implicit lock requests. For explicit locking, specify 
the wait time through your calls to the instance of 
`java.util.concurrent.locks.Lock` returned from the `Region` API. You can wait 
a specific amount of time, return immediately either with or without the lock, 
or wait indefinitely (a usage sketch follows this list).
-
-    ``` pre
-    <cache lock-timeout="60"> 
-    </cache>
-    ```
-
-    gfsh:
-
-    ``` pre
-    gfsh>alter runtime --lock-timeout=60 
-    ```
-
--   The lock lease limits how long a lock can be held before it is 
automatically released. A timed lock allows the application to recover when a 
member fails to release an obtained lock within the lease time. For all 
locking, this timeout is set with the cache attribute `lock-lease`.
-
-    ``` pre
-    <cache lock-lease="120"> </cache>
-    ```
-
-    gfsh:
-
-    ``` pre
-    gfsh>alter runtime --lock-lease=120
-    ```
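-
-As a usage sketch of those explicit wait options, the following waits up to 60 
seconds for an entry lock rather than blocking indefinitely (it assumes the 
`this.currRegion` global region used in the examples at the end of this topic; 
as in those examples, imports are elided):
-
-``` pre
-Lock entryLock = this.currRegion.getDistributedLock("key1");
-try {
-  // Wait at most 60 seconds to acquire the lock
-  if (entryLock.tryLock(60, TimeUnit.SECONDS)) {
-    try {
-      this.currRegion.put("key1", "newValue");
-    } finally {
-      entryLock.unlock();
-    }
-  }
-} catch (InterruptedException e) {
-  Thread.currentThread().interrupt();
-}
-```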
-
-## <a id="locking_in_global_regions__section_031727F04D114B42944872360A386907" 
class="no-quick-link"></a>Optimize Locking Performance
-
-For each global region, one of the members with the region defined will be 
assigned the job of lock grantor. The lock grantor runs the lock service that 
receives lock requests from system members, queues them as needed, and grants 
them in the order received.
-
-The lock grantor is at a slight advantage over other members as it is the only 
one that does not have to send a message to request a lock. The grantor’s 
requests cost the least for the same reason. Thus, you can optimize locking in 
a region by assigning lock grantor status to the member that acquires the most 
locks. This may be the member that performs the most puts and thus requires the 
most implicit locks, or it may be the member that performs many explicit locks.
-
-The lock grantor is assigned as follows:
-
--   Any member with the region defined that requests lock grantor status is 
assigned it. Thus at any time, the most recent member to make the request is 
the lock grantor.
--   If no member requests lock grantor status for a region, or if the current 
lock grantor goes away, the system assigns a lock grantor from the members that 
have the region defined in their caches.
-
-You can request lock grantor status:
-
-1.  At region creation through the `is-lock-grantor` attribute. You can 
retrieve this attribute through the region method, `getAttributes`, to see 
whether you requested to be lock grantor for the region.
-    **Note:**
-    The `is-lock-grantor` attribute does not change after region creation.
-
-2.  After region creation through the region `becomeLockGrantor` method. 
Changing lock grantors should be done with care, however, as doing so takes 
cycles from other operations. In particular, be careful to avoid creating a 
situation where you have members vying for lock grantor status.
-
-## <a id="locking_in_global_regions__section_34661E38DFF9420B89C1A2B25F232D53" 
class="no-quick-link"></a>Examples
-
-These two examples show entry locking and unlocking. Note how the entry’s 
`Lock` object is obtained and then its lock method invoked to actually set the 
lock. The example program stores the entry lock information in a hash table for 
future reference.
-
-``` pre
-/* Lock a data entry */ 
-HashMap lockedItemsMap = new HashMap(); 
-...
-  String entryKey = ... 
-  if (!lockedItemsMap.containsKey(entryKey)) 
-  { 
-    Lock lock = this.currRegion.getDistributedLock(entryKey); 
-    lock.lock(); 
-    lockedItemsMap.put(entryKey, lock); 
-  } 
-  ...
-```
-
-``` pre
-/* Unlock a data entry */ 
-  String entryKey = ... 
-  if (lockedItemsMap.containsKey(entryKey)) 
-  { 
-    Lock lock = (Lock) lockedItemsMap.remove(entryKey);
-    lock.unlock();
-  }
-```

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/distributed_regions/managing_distributed_regions.html.md.erb
----------------------------------------------------------------------
diff --git 
a/developing/distributed_regions/managing_distributed_regions.html.md.erb 
b/developing/distributed_regions/managing_distributed_regions.html.md.erb
deleted file mode 100644
index f36d8ca..0000000
--- a/developing/distributed_regions/managing_distributed_regions.html.md.erb
+++ /dev/null
@@ -1,47 +0,0 @@
----
-title:  Configure Distributed, Replicated, and Preloaded Regions
----
-
-Plan the configuration and ongoing management of your distributed, replicated, 
and preloaded regions, and configure the regions.
-
-<a 
id="configure_distributed_region__section_11E9E1B3EB5845D9A4FB226A992B8D0D"></a>
-Before you begin, understand [Basic Configuration and 
Programming](../../basic_config/book_intro.html).
-
-1.  Choose the region shortcut setting that most closely matches your region 
configuration. See **`org.apache.geode.cache.RegionShortcut`** or [Region 
Shortcuts](../../reference/topics/chapter_overview_regionshortcuts.html#concept_ymp_rkz_4dffhdfhk).
 To create a replicated region, use one of the `REPLICATE` shortcut settings. 
To create a preloaded region, set your region `data-policy` to `preloaded`. 
This `cache.xml` declaration creates a replicated region:
-
-    ``` pre
-    <region-attributes refid="REPLICATE"> 
-    </region-attributes>
-    ```
-
-    You can also use gfsh to configure a region. For example:
-
-    ``` pre
-    gfsh>create region --name=regionA --type=REPLICATE
-    ```
-
-    See [Region Types](../region_options/region_types.html#region_types).
-
-2.  Choose the level of distribution for your region. The region shortcuts in 
`RegionShortcut` for distributed regions use `distributed-ack` scope. If you 
need a different scope, set the `region-attributes` `scope` to 
`distributed-no-ack` or `global`.
-
-    Example:
-
-    ``` pre
-    <region-attributes refid="REPLICATE" scope="distributed-no-ack"> 
-    </region-attributes>
-    ```
-
-3.  If you are using the `distributed-ack` scope, optionally enable 
concurrency checks for the region.
-
-    Example:
-
-    ``` pre
-    <region-attributes refid="REPLICATE" scope="distributed-ack" 
concurrency-checks-enabled="true"> 
-    </region-attributes>
-    ```
-
-4.  If you are using `global` scope, program any explicit locking you need in 
addition to the automated locking provided by Geode.
-
-## <a 
id="configure_distributed_region__section_6F53FB58B8A84D0F8086AFDB08A649F9" 
class="no-quick-link"></a>Local Destroy and Invalidate in the Replicated Region
-
-Of the operations that affect only the local cache, only local region destroy 
is allowed in a replicated region. Other local operations either are not 
configurable or throw exceptions. For example, you cannot use local destroy as 
the expiration action on a replicated region. This is because local operations 
like entry invalidation and destruction remove data from the local cache only. 
A replicated region would no longer be complete if data were removed locally 
but left intact in the region's other replicas.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/distributed_regions/region_entry_versions.html.md.erb
----------------------------------------------------------------------
diff --git a/developing/distributed_regions/region_entry_versions.html.md.erb 
b/developing/distributed_regions/region_entry_versions.html.md.erb
deleted file mode 100644
index 1781fc7..0000000
--- a/developing/distributed_regions/region_entry_versions.html.md.erb
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: Consistency for Region Updates
----
-
-<a id="topic_CF2798D3E12647F182C2CEC4A46E2045"></a>
-
-
-Geode ensures that all copies of a region eventually reach a consistent state 
on all members and clients that host the region, including Geode members that 
distribute region events.
-
--   **[Consistency Checking by Region 
Type](../../developing/distributed_regions/how_region_versioning_works.html#topic_7A4B6C6169BD4B1ABD356294F744D236)**
-
-    Geode performs different consistency checks depending on the type of 
region you have configured.
-
--   **[Configuring Consistency 
Checking](../../developing/distributed_regions/how_region_versioning_works.html#topic_B64891585E7F4358A633C792F10FA23E)**
-
-    Geode enables consistency checking by default. You cannot disable 
consistency checking for persistent regions. For all other regions, you can 
explicitly enable or disable consistency checking by setting the 
`concurrency-checks-enabled` region attribute in `cache.xml` to "true" or 
"false."
-
--   **[Overhead for Consistency 
Checks](../../developing/distributed_regions/how_region_versioning_works.html#topic_0BDACA590B2C4974AC9C450397FE70B2)**
-
-    Consistency checking requires additional overhead for storing and 
distributing version and timestamp information, as well as for maintaining 
destroyed entries for a period of time to meet consistency requirements.
-
--   **[How Consistency Checking Works for Replicated 
Regions](../../developing/distributed_regions/how_region_versioning_works.html#topic_C5B74CCDD909403C815639339AA03758)**
-
-    Each region stores version and timestamp information for use in conflict 
detection. Geode members use the recorded information to detect and resolve 
conflicts consistently before applying a distributed update.
-
--   **[How Destroy and Clear Operations Are 
Resolved](../../developing/distributed_regions/how_region_versioning_works.html#topic_321B05044B6641FCAEFABBF5066BD399)**
-
-    When consistency checking is enabled for a region, a Geode member does not 
immediately remove an entry from the region when an application destroys the 
entry. Instead, the member retains the entry with its current version stamp for 
a period of time in order to detect possible conflicts with operations that 
have occurred. The retained entry is referred to as a *tombstone*. Geode 
retains tombstones for partitioned regions and non-replicated regions as well 
as for replicated regions, in order to provide consistency.
-
--   **[Transactions with Consistent 
Regions](../../developing/distributed_regions/how_region_versioning_works.html#topic_32ACFA5542C74F3583ECD30467F352B0)**
-
-    A transaction that modifies a region having consistency checking enabled 
generates all necessary version information for region updates when the 
transaction commits.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/events/cache_event_handler_examples.html.md.erb
----------------------------------------------------------------------
diff --git a/developing/events/cache_event_handler_examples.html.md.erb 
b/developing/events/cache_event_handler_examples.html.md.erb
deleted file mode 100644
index 858003d..0000000
--- a/developing/events/cache_event_handler_examples.html.md.erb
+++ /dev/null
@@ -1,138 +0,0 @@
----
-title:  Cache Event Handler Examples
----
-
-Some examples of cache event handlers.
-
-## <a 
id="cache_event_handler_examples__section_F2790678E9DE4A81B73A4B6346CB210B" 
class="no-quick-link"></a>Declaring and Loading an Event Handler with Parameters
-
-This declares an event handler for a region in the `cache.xml`. The handler is 
a cache listener designed to communicate changes to a DB2 database. The 
declaration includes the listener’s parameters, which are the database path, 
username, and password.
-
-``` pre
-<region name="exampleRegion"> 
-  <region-attributes> 
-  . . . 
-    <cache-listener> 
-      <class-name>JDBCListener</class-name> 
-      <parameter name="url"> 
-        <string>jdbc:db2:SAMPLE</string> 
-      </parameter> 
-      <parameter name="username"> 
-        <string>gfeadmin</string> 
-      </parameter> 
-      <parameter name="password"> 
-        <string>admin1</string> 
-      </parameter> 
-    </cache-listener> 
-  </region-attributes> 
-  </region>
-```
-
-This code listing shows part of the implementation of the `JDBCListener` 
declared in the `cache.xml`. This listener implements the `Declarable` 
interface. When an entry is created in the cache, this listener’s 
`afterCreate` callback method is triggered to update the database. Here the 
listener’s properties, provided in the `cache.xml`, are passed into the 
`Declarable.init` method and used to create a database connection.
-
-``` pre
-. . .
-public class JDBCListener
-extends CacheListenerAdapter
-implements Declarable {
-  public void afterCreate(EntryEvent e) {
-  . . .
-    // Initialize the database driver and connection using input parameters
-    Driver driver = (Driver) Class.forName(DRIVER_NAME).newInstance();
-    Connection connection =
-      DriverManager.getConnection(_url, _username, _password);
-    System.out.println(connection);
-        . . .
-  }
-    . . .
-  public void init(Properties props) {
-    this._url = props.getProperty("url");
-    this._username = props.getProperty("username");
-    this._password = props.getProperty("password");
-  }
-}
-```
-
-## <a 
id="cache_event_handler_examples__section_2B4275C1AE744794AAD22530E5ECA8CC" 
class="no-quick-link"></a>Installing an Event Handler Through the API
-
-This listing defines a cache listener using the `RegionFactory` method 
`addCacheListener`.
-
-``` pre
-Region newReg = cache.createRegionFactory()
-          .addCacheListener(new SimpleCacheListener())
-          .create(name);
-```
-
-You can create a cache writer similarly, using the `RegionFactory` method 
`setCacheWriter`, like this:
-
-``` pre
-Region newReg = cache.createRegionFactory()
-          .setCacheWriter(new SimpleCacheWriter())
-          .create(name);
-```
-
-## <a 
id="cache_event_handler_examples__section_C62E9535C43B4BC5A7AA7B8B4125D1EB" 
class="no-quick-link"></a>Installing Multiple Listeners on a Region
-
-XML:
-
-``` pre
-<region name="exampleRegion">
-  <region-attributes>
-    . . .
-    <cache-listener>
-      <class-name>myCacheListener1</class-name>
-    </cache-listener>
-    <cache-listener>
-      <class-name>myCacheListener2</class-name>
-    </cache-listener>
-    <cache-listener>
-      <class-name>myCacheListener3</class-name>
-    </cache-listener>
-  </region-attributes>
-</region>
-```
-
-API:
-
-``` pre
-CacheListener listener1 = new myCacheListener1(); 
-CacheListener listener2 = new myCacheListener2(); 
-CacheListener listener3 = new myCacheListener3(); 
-
-Region nr = cache.createRegionFactory()
-  .initCacheListeners(new CacheListener[]
-    {listener1, listener2, listener3})
-  .setScope(Scope.DISTRIBUTED_NO_ACK)
-  .create(name);
-```
-
-## <a 
id="cache_event_handler_examples__section_3AF3D7C9927F491F8BACDB72834E42AA" 
class="no-quick-link"></a>Installing a Write-Behind Cache Listener
-
-``` pre
-<!-- AsyncEventQueue with listener that performs WBCL work -->
-<cache>
-   <async-event-queue id="sampleQueue" persistent="true"
-    disk-store-name="exampleStore" parallel="false">
-      <async-event-listener>
-         <class-name>MyAsyncListener</class-name>
-         <parameter name="url"> 
-           <string>jdbc:db2:SAMPLE</string> 
-         </parameter> 
-         <parameter name="username"> 
-           <string>gfeadmin</string> 
-         </parameter> 
-         <parameter name="password"> 
-           <string>admin1</string> 
-         </parameter> 
-      </async-event-listener>
-   </async-event-queue>
-
-   <!-- Add the AsyncEventQueue to regions that use the WBCL -->
-   <region name="data">
-      <region-attributes async-event-queue-ids="sampleQueue">
-      </region-attributes>
-   </region>
-</cache>
-```
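-
-You can attach the queue through the API as well. A minimal sketch, assuming the queue "sampleQueue" has already been created (the region shortcut is illustrative):
-
-``` pre
-// Attach an existing async event queue to a region by its id
-Region dataRegion = cache.createRegionFactory(RegionShortcut.PARTITION)
-    .addAsyncEventQueueId("sampleQueue")
-    .create("data");
-```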

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/events/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/developing/events/chapter_overview.html.md.erb 
b/developing/events/chapter_overview.html.md.erb
deleted file mode 100644
index 52e1905..0000000
--- a/developing/events/chapter_overview.html.md.erb
+++ /dev/null
@@ -1,27 +0,0 @@
----
-title:  Events and Event Handling
----
-
-Geode provides versatile and reliable event distribution and handling for your 
cached data and system member events.
-
--   **[How Events Work](../../developing/events/how_events_work.html)**
-
-    Members in your Geode distributed system receive cache updates from other members through cache events. The other members can be peers of the member, clients, servers, or members of other distributed systems.
-
--   **[Implementing Geode Event 
Handlers](../../developing/events/event_handler_overview.html)**
-
-    You can specify event handlers for region and region entry operations and 
for administrative events.
-
--   **[Configuring Peer-to-Peer Event 
Messaging](../../developing/events/configure_p2p_event_messaging.html)**
-
-    You can receive events from distributed system peers for any region that 
is not a local region. Local regions receive only local cache events.
-
--   **[Configuring Client/Server Event 
Messaging](../../developing/events/configure_client_server_event_messaging.html)**
-
-    You can receive events from your servers for server-side cache events and 
query result changes.
-
--   **[Configuring Multi-Site (WAN) Event 
Queues](../../developing/events/configure_multisite_event_messaging.html)**
-
-    In a multi-site (WAN) installation, Geode uses gateway sender queues to 
distribute events for regions that are configured with a gateway sender. 
AsyncEventListeners also use an asynchronous event queue to distribute events 
for configured regions. This section describes additional options for 
configuring the event queues that are used by gateway senders or 
AsyncEventListener implementations.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/events/configure_client_server_event_messaging.html.md.erb
----------------------------------------------------------------------
diff --git 
a/developing/events/configure_client_server_event_messaging.html.md.erb 
b/developing/events/configure_client_server_event_messaging.html.md.erb
deleted file mode 100644
index 2d6185e..0000000
--- a/developing/events/configure_client_server_event_messaging.html.md.erb
+++ /dev/null
@@ -1,64 +0,0 @@
----
-title:  Configuring Client/Server Event Messaging
----
-
-You can receive events from your servers for server-side cache events and 
query result changes.
-
-<a 
id="receiving_events_from_servers__section_F21FB253CCC244708CB953B6D5866A91"></a>
-For cache updates, you can configure clients to receive entry keys and values, or just entry keys with the data retrieved lazily when requested. Queries run continuously against server cache events, with the server sending the deltas for your query result sets.
-
-Before you begin, set up your client/server installation and configure and 
program your basic event messaging.
-
-Servers receive updates for all entry events in their clients' client regions.
-
-To receive entry events in the client from the server:
-
-1.  Set the client pool `subscription-enabled` to true. See 
[&lt;pool&gt;](../../reference/topics/client-cache.html#cc-pool).
-2.  Program the client to register interest in the entries you need.
-
-    **Note:**
-    This must be done through the API.
-
-    Register interest in all keys, a key list, individual keys, or keys that match a regular expression. By default, no entries are registered to receive updates. You also specify whether the server is to send values with entry update events.
-
-    1.  Get an instance of the region where you want to register interest.
-    2.  Use the region's `registerInterest`\* methods to specify the entries you want. Examples:
-
-        ``` pre
-        // Register interest in a single key and download its entry 
-        // at this time, if it is available in the server cache 
-        Region region1 = . . . ;
-        region1.registerInterest("key-1"); 
-
-        // Register interest in a list of keys, but do not do an initial bulk load;
-        // do not send values for create/update events - just send the key with invalidation
-        Region region2 = . . . ; 
-        List list = new ArrayList();
-        list.add("key-1"); 
-        list.add("key-2"); 
-        list.add("key-3"); 
-        list.add("key-4");
-        region2.registerInterest(list, InterestResultPolicy.NONE, false); 
-
-        // Register interest in all keys and download all available keys now
-        Region region3 = . . . ;
-        region3.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS); 
-
-        // Register interest in all keys matching a regular expression 
-        Region region4 = . . . ; 
-        region4.registerInterestRegex("[a-zA-Z]+_[0-9]+"); 
-        ```
-
-        You can call the register interest methods multiple times for a single region. Each interest registration adds to the server’s list of registered interest criteria for the client. So if a client registers interest in key ‘A’ and then registers interest in the regular expression "B\*", the server sends updates for all entries with key ‘A’ or with a key beginning with the letter ‘B’, as sketched below.
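-
-        For instance, a minimal sketch of such cumulative registrations (the key and the regular expression are illustrative):
-
-        ``` pre
-        Region region = . . . ;
-        // Both registrations accumulate in the server's interest list for this client
-        region.registerInterest("A");
-        region.registerInterestRegex("B.*");
-        ```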
-
-    3.  For highly available event messaging, configure server redundancy. See 
[Configuring Highly Available 
Servers](configuring_highly_available_servers.html).
-    4.  To have events enqueued for your clients during client downtime, 
configure durable client/server messaging.
-    5.  Write any continuous queries (CQs) that you want to run to receive continuously streaming updates to client queries. CQ events do not update the client cache. If you have dependencies between CQs and/or interest registrations, and you want the two types of subscription events to arrive as closely together as possible on the client, use a single server pool for everything. Using different pools can lead to time differences in the delivery of events, because the pools might use different servers to process and deliver the event messages. A sketch of registering a CQ follows.
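-
-        As a rough sketch of registering a CQ (the query, region name, and `MyCqListener` class are illustrative; a pool with subscriptions enabled is assumed):
-
-        ``` pre
-        // Register a continuous query against a server-side region
-        QueryService queryService = pool.getQueryService();
-        CqAttributesFactory cqf = new CqAttributesFactory();
-        cqf.addCqListener(new MyCqListener()); // hypothetical CqListener implementation
-        CqQuery priceCq = queryService.newCq("priceTracker",
-            "SELECT * FROM /exampleRegion e WHERE e.price > 100", cqf.create());
-        priceCq.execute(); // begin receiving streaming CQ events
-        ```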
-
--   **[Configuring Highly Available 
Servers](../../developing/events/configuring_highly_available_servers.html)**
-
--   **[Implementing Durable Client/Server 
Messaging](../../developing/events/implementing_durable_client_server_messaging.html)**
-
--   **[Tuning Client/Server Event 
Messaging](../../developing/events/tune_client_server_event_messaging.html)**
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/events/configure_multisite_event_messaging.html.md.erb
----------------------------------------------------------------------
diff --git a/developing/events/configure_multisite_event_messaging.html.md.erb 
b/developing/events/configure_multisite_event_messaging.html.md.erb
deleted file mode 100644
index 6bd1e8b..0000000
--- a/developing/events/configure_multisite_event_messaging.html.md.erb
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title:  Configuring Multi-Site (WAN) Event Queues
----
-
-In a multi-site (WAN) installation, Geode uses gateway sender queues to 
distribute events for regions that are configured with a gateway sender. 
AsyncEventListeners also use an asynchronous event queue to distribute events 
for configured regions. This section describes additional options for 
configuring the event queues that are used by gateway senders or 
AsyncEventListener implementations.
-
-<a 
id="configure_multisite_event_messaging__section_1BBF77E166E84F7CA110385FD03D8453"></a>
-Before you begin, set up your multi-site (WAN) installation or configure 
asynchronous event queues and AsyncEventListener implementations. See 
[Configuring a Multi-site (WAN) 
System](../../topologies_and_comm/multi_site_configuration/setting_up_a_multisite_system.html#setting_up_a_multisite_system)
 or [Implementing an AsyncEventListener for Write-Behind Cache Event 
Handling](implementing_write_behind_event_handler.html#implementing_write_behind_cache_event_handling).
-
--   **[Persisting an Event 
Queue](../../developing/events/configuring_highly_available_gateway_queues.html)**
-
-    You can configure a gateway sender queue or an asynchronous event queue to 
persist data to disk similar to the way in which replicated regions are 
persisted.
-
--   **[Configuring Dispatcher Threads and Order Policy for Event 
Distribution](../../developing/events/configuring_gateway_concurrency_levels.html)**
-
-    By default, Geode uses multiple dispatcher threads to process region 
events simultaneously in a gateway sender queue for distribution between sites, 
or in an asynchronous event queue for distributing events for write-behind 
caching. With serial queues, you can also configure the ordering policy for 
dispatching those events.
-
--   **[Conflating Events in a 
Queue](../../developing/events/conflate_multisite_gateway_queue.html)**
-
-    Conflating a queue improves distribution performance. When conflation is 
enabled, only the latest queued value is sent for a particular key.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/events/configure_p2p_event_messaging.html.md.erb
----------------------------------------------------------------------
diff --git a/developing/events/configure_p2p_event_messaging.html.md.erb 
b/developing/events/configure_p2p_event_messaging.html.md.erb
deleted file mode 100644
index 73c7d74..0000000
--- a/developing/events/configure_p2p_event_messaging.html.md.erb
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title:  Configuring Peer-to-Peer Event Messaging
----
-
-You can receive events from distributed system peers for any region that is 
not a local region. Local regions receive only local cache events.
-
-<a 
id="configuring_event_distribution__section_7D5B1F0C0EF24E58BB3C335CB4EA9A3C"></a>
-Peer distribution is done according to the region's configuration.
-
--   Replicated regions always receive all events from peers and require no 
further configuration. Replicated regions are configured using the `REPLICATE` 
region shortcut settings.
--   For non-replicated regions, decide whether you want to receive all entry 
events from the distributed cache or only events for the data you have stored 
locally. To configure:
-    -   To receive all events, set the `subscription-attributes` 
`interest-policy` to `all`:
-
-        ``` pre
-        <region-attributes> 
-            <subscription-attributes interest-policy="all"/> 
-        </region-attributes>
-        ```
-
-    -   To receive events just for the data you have stored locally, set the 
`subscription-attributes` `interest-policy` to `cache-content` or do not set it 
(`cache-content` is the default):
-
-        ``` pre
-        <region-attributes> 
-            <subscription-attributes interest-policy="cache-content"/> 
-        </region-attributes>
-        ```
-
-    For partitioned regions, this setting affects only the receipt of events; the data itself is stored according to the region partitioning. Partitioned regions with an interest policy of `all` can create network bottlenecks, so if you can, run listeners in every member that hosts the partitioned region data and use the `cache-content` interest policy.
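-
-    As a minimal API sketch of the `cache-content` setting (the region shortcut and name are illustrative):
-
-    ``` pre
-    // Receive only events for locally stored data (the default interest policy)
-    Region region = cache.createRegionFactory(RegionShortcut.PARTITION)
-        .setSubscriptionAttributes(new SubscriptionAttributes(InterestPolicy.CACHE_CONTENT))
-        .create("exampleRegion");
-    ```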
-
-**Note:**
-You can also configure regions using the gfsh command-line interface. See 
[Region 
Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_EF03119A40EE492984F3B6248596E1DD).
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/events/configuring_gateway_concurrency_levels.html.md.erb
----------------------------------------------------------------------
diff --git 
a/developing/events/configuring_gateway_concurrency_levels.html.md.erb 
b/developing/events/configuring_gateway_concurrency_levels.html.md.erb
deleted file mode 100644
index 5d001c3..0000000
--- a/developing/events/configuring_gateway_concurrency_levels.html.md.erb
+++ /dev/null
@@ -1,141 +0,0 @@
----
-title:  Configuring Dispatcher Threads and Order Policy for Event Distribution
----
-
-By default, Geode uses multiple dispatcher threads to process region events 
simultaneously in a gateway sender queue for distribution between sites, or in 
an asynchronous event queue for distributing events for write-behind caching. 
With serial queues, you can also configure the ordering policy for dispatching 
those events.
-
-By default, a gateway sender queue or asynchronous event queue uses 5 dispatcher threads per queue. This supports applications that can process queued events concurrently for distribution to another Geode site or listener. If your application does not require concurrent distribution, or if you do not have enough resources to support the requirements of multiple dispatcher threads, then you can configure a single dispatcher thread to process a queue.
-
--   [Using Multiple Dispatcher Threads to Process a 
Queue](configuring_gateway_concurrency_levels.html#concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_20E8EFCE89EB4DC7AA822D03C8E0F470)
--   [Performance and Memory 
Considerations](configuring_gateway_concurrency_levels.html#concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_C4C83B5C0FDD4913BA128365EE7E4E35)
--   [Configuring the Ordering Policy for Serial 
Queues](configuring_gateway_concurrency_levels.html#concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_4835BA30CDFD4B658BD2576F6BC2E23F)
--   [Examples—Configuring Dispatcher Threads and Ordering Policy for a 
Serial Gateway Sender 
Queue](configuring_gateway_concurrency_levels.html#concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_752F08F9064B4F67A80DA0A994671EA0)
-
-## <a 
id="concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_20E8EFCE89EB4DC7AA822D03C8E0F470"
 class="no-quick-link"></a>Using Multiple Dispatcher Threads to Process a Queue
-
-When multiple dispatcher threads are configured for a parallel queue, Geode 
simply uses multiple threads to process the contents of each individual queue. 
The total number of queues that are created is still determined by the number 
of Geode members that host the region.
-
-When multiple dispatcher threads are configured for a serial queue, Geode 
creates an additional copy of the queue for each thread on each member that 
hosts the queue. To obtain the maximum throughput, increase the number of 
dispatcher threads until your network is saturated.
-
-The following diagram illustrates a serial gateway sender queue that is 
configured with multiple dispatcher threads.
-<img src="../../images/MultisiteConcurrency_WAN_Gateway.png" 
id="concept_6C52A037E39E4FD6AE4C6A982A4A1A85__image_093DAC58EBEE456485562C92CA79899F"
 class="image" width="624" />
-
-## <a 
id="concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_C4C83B5C0FDD4913BA128365EE7E4E35"
 class="no-quick-link"></a>Performance and Memory Considerations
-
-When a serial gateway sender or an asynchronous event queue uses multiple 
dispatcher threads, consider the following:
-
--   Queue attributes are repeated for each copy of the queue that is created 
for a dispatcher thread. That is, each concurrent queue points to the same disk 
store, so the same disk directories are used. If persistence is enabled and 
overflow occurs, the threads that insert entries into the queues compete for 
the disk. This applies to application threads and dispatcher threads, so it can 
affect application performance.
--   The `maximum-queue-memory` setting applies to each copy of the serial 
queue. If you configure 10 dispatcher threads and the maximum queue memory is 
set to 100MB, then the total maximum queue memory for the queue is 1000MB on 
each member that hosts the queue.
-
-## <a 
id="concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_4835BA30CDFD4B658BD2576F6BC2E23F"
 class="no-quick-link"></a>Configuring the Ordering Policy for Serial Queues
-
-When using multiple `dispatcher-threads` (greater than 1) with a serial event 
queue, you can also configure the `order-policy` that those threads use to 
distribute events from the queue. The valid order policy values are:
-
--   **key (default)**. All updates to the same key are distributed in order. 
Geode preserves key ordering by placing all updates to the same key in the same 
dispatcher thread queue. You typically use key ordering when updates to entries 
have no relationship to each other, such as for an application that uses a 
single feeder to distribute stock updates to several other systems.
--   **thread**. All region updates from a given thread are distributed in 
order. Geode preserves thread ordering by placing all region updates from the 
same thread into the same dispatcher thread queue. In general, use thread 
ordering when updates to one region entry affect updates to another region 
entry.
--   **partition**. All region events that share the same partitioning key are distributed in order. Specify partition ordering when applications use a [PartitionResolver](/releases/latest/javadoc/org/apache/geode/cache/PartitionResolver.html) to implement [custom partitioning](../partitioned_regions/using_custom_partition_resolvers.html). With partition ordering, all entries that share the same "partitioning key" (RoutingObject) are placed into the same dispatcher thread queue, as sketched below.
-
-You cannot configure the `order-policy` for a parallel event queue, because 
parallel queues cannot preserve event ordering for regions. Only the ordering 
of events for a given partition (or in a given queue of a distributed region) 
can be preserved.
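-
-For partition ordering, a rough sketch of a custom `PartitionResolver` (the class, the key format, and the routing rule are illustrative):
-
-``` pre
-// Illustrative resolver: all orders for the same customer share a routing
-// object, so partition ordering dispatches them in order.
-public class CustomerOrderResolver implements PartitionResolver {
-  public Object getRoutingObject(EntryOperation op) {
-    String key = (String) op.getKey();
-    return key.substring(0, key.indexOf('|'));  // customer id prefix as the RoutingObject
-  }
-  public String getName() { return "CustomerOrderResolver"; }
-  public void close() {}
-}
-```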
-
-## <a 
id="concept_6C52A037E39E4FD6AE4C6A982A4A1A85__section_752F08F9064B4F67A80DA0A994671EA0"
 class="no-quick-link"></a>Examples—Configuring Dispatcher Threads and 
Ordering Policy for a Serial Gateway Sender Queue
-
-To increase the number of dispatcher threads and set the ordering policy for a 
serial gateway sender, use one of the following mechanisms.
-
--   **cache.xml configuration**
-
-    ``` pre
-    <cache>
-      <gateway-sender id="NY" parallel="false" 
-       remote-distributed-system-id="1"
-       enable-persistence="true"
-       disk-store-name="gateway-disk-store"
-       maximum-queue-memory="200"
-       dispatcher-threads="7" order-policy="key"/> 
-       ... 
-    </cache>
-    ```
-
--   **Java API configuration**
-
-    ``` pre
-    Cache cache = new CacheFactory().create();
-
-    GatewaySenderFactory gateway = cache.createGatewaySenderFactory();
-    gateway.setParallel(false);
-    gateway.setPersistenceEnabled(true);
-    gateway.setDiskStoreName("gateway-disk-store");
-    gateway.setMaximumQueueMemory(200);
-    gateway.setDispatcherThreads(7);
-    gateway.setOrderPolicy(OrderPolicy.KEY);
-    GatewaySender sender = gateway.create("NY", 1);
-    sender.start();
-    ```
-
--   **gfsh:**
-
-    ``` pre
-    gfsh>create gateway-sender --id="NY" 
-       --parallel=false 
-       --remote-distributed-system-id="1"
-       --enable-persistence=true
-       --disk-store-name="gateway-disk-store"
-       --maximum-queue-memory=200
-       --dispatcher-threads=7 
-       --order-policy="key"
-    ```
-
-The following examples show how to set dispatcher threads and ordering policy 
for an asynchronous event queue:
-
--   **cache.xml configuration**
-
-    ``` pre
-    <cache>
-       <async-event-queue id="sampleQueue" persistent="true"
-        disk-store-name="async-disk-store" parallel="false"
-        dispatcher-threads="7" order-policy="key">
-          <async-event-listener>
-             <class-name>MyAsyncEventListener</class-name>
-             <parameter name="url"> 
-               <string>jdbc:db2:SAMPLE</string> 
-             </parameter> 
-             <parameter name="username"> 
-               <string>gfeadmin</string> 
-             </parameter> 
-             <parameter name="password"> 
-               <string>admin1</string> 
-             </parameter> 
-          </async-event-listener>
-       </async-event-queue>
-    ...
-    </cache>
-    ```
-
--   **Java API configuration**
-
-    ``` pre
-    Cache cache = new CacheFactory().create();
-    AsyncEventQueueFactory factory = cache.createAsyncEventQueueFactory();
-    factory.setPersistent(true);
-    factory.setDiskStoreName("async-disk-store");
-    factory.setParallel(false);
-    factory.setDispatcherThreads(7);
-    factory.setOrderPolicy(OrderPolicy.KEY);
-    AsyncEventListener listener = new MyAsyncEventListener();
-    AsyncEventQueue sampleQueue = factory.create("sampleQueue", listener);
-    ```
-
-
--   **gfsh:**
-
-    ``` pre
-    gfsh>create async-event-queue --id="sampleQueue" --persistent=true
-    --disk-store="async-disk-store" --parallel=false
-    --dispatcher-threads=7 --order-policy="key"
-    --listener=MyAsyncEventListener 
-    --listener-param=url#jdbc:db2:SAMPLE 
-    --listener-param=username#gfeadmin 
-    --listener-param=password#admin1
-    ```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/events/configuring_highly_available_gateway_queues.html.md.erb
----------------------------------------------------------------------
diff --git 
a/developing/events/configuring_highly_available_gateway_queues.html.md.erb 
b/developing/events/configuring_highly_available_gateway_queues.html.md.erb
deleted file mode 100644
index a674a45..0000000
--- a/developing/events/configuring_highly_available_gateway_queues.html.md.erb
+++ /dev/null
@@ -1,102 +0,0 @@
----
-title:  Persisting an Event Queue
----
-
-You can configure a gateway sender queue or an asynchronous event queue to 
persist data to disk similar to the way in which replicated regions are 
persisted.
-
-<a 
id="configuring_highly_available_gateway_queues__section_7EB2A7E38B074AAAA06D22C59687CB8A"></a>
-Persisting a queue provides high availability for the event messaging that the sender performs. For example, if a persistent gateway sender queue goes offline for any reason, when the member that hosts the sender restarts, it automatically reloads the queue and resumes sending messages. If an asynchronous event queue goes offline for any reason, write-behind caching can resume where it left off when the queue is brought back online.
-
-Geode persists an event queue if you set the `enable-persistence` attribute to true. The queue is persisted to the disk store specified in the queue's `disk-store-name` attribute, or to the default disk store if you do not specify a store name.
-
-You must configure the event queue to use persistence if you are using 
persistent regions. The use of non-persistent event queues with persistent 
regions is not supported.
-
-When you enable persistence for a queue, the `maximum-queue-memory` attribute 
determines how much memory the queue can consume before it overflows to disk. 
By default, this value is set to 100MB.
-
-**Note:**
-If you configure a parallel queue and/or you configure multiple dispatcher 
threads for a queue, the values that are defined in the `maximum-queue-memory` 
and `disk-store-name` attributes apply to each instance of the queue.
-
-In the example below, the gateway sender queue uses "diskStoreA" for persistence and overflow, and the queue has a maximum queue memory of 100MB:
-
--   XML example:
-
-    ``` pre
-    <cache>
-      <gateway-sender id="persistedsender1" parallel="false" 
-       remote-distributed-system-id="1"
-       enable-persistence="true"
-       disk-store-name="diskStoreA"
-       maximum-queue-memory="100"/> 
-       ... 
-    </cache>
-    ```
-
--   API example:
-
-    ``` pre
-    Cache cache = new CacheFactory().create();
-
-    GatewaySenderFactory gateway = cache.createGatewaySenderFactory();
-    gateway.setParallel(false);
-    gateway.setPersistenceEnabled(true);
-    gateway.setDiskStoreName("diskStoreA");
-    gateway.setMaximumQueueMemory(100); 
-    GatewaySender sender = gateway.create("persistedsender1", 1);
-    sender.start();
-    ```
-
--   gfsh:
-
-    ``` pre
-    ``` pre
-    gfsh>create gateway-sender --id="persistedsender1" --parallel=false 
-    --remote-distributed-system-id=1 --enable-persistence=true 
-    --disk-store-name=diskStoreA 
-    --maximum-queue-memory=100
-    ```
-
-If you were to configure 10 dispatcher threads for the serial gateway sender, then the total maximum memory for the gateway sender queue would be 1000MB on each Geode member that hosted the sender, because Geode creates a separate copy of the queue per thread.
-
-The following example shows a similar configuration for an asynchronous event 
queue:
-
--   XML example:
-
-    ``` pre
-    <cache>
-       <async-event-queue id="persistentAsyncQueue" persistent="true"
-        disk-store-name="diskStoreA" parallel="true">
-          <async-event-listener>
-             <class-name>MyAsyncEventListener</class-name>
-             <parameter name="url"> 
-               <string>jdbc:db2:SAMPLE</string> 
-             </parameter> 
-             <parameter name="username"> 
-               <string>gfeadmin</string> 
-             </parameter> 
-             <parameter name="password"> 
-               <string>admin1</string> 
-             </parameter> 
-          </async-event-listener>
-        </async-event-queue>
-    ...
-    </cache>
-    ```
-
--   API example:
-
-    ``` pre
-    Cache cache = new CacheFactory().create();
-    AsyncEventQueueFactory factory = cache.createAsyncEventQueueFactory();
-    factory.setPersistent(true);
-    factory.setDiskStoreName("diskStoreA");
-    factory.setParallel(true);
-    AsyncEventListener listener = new MyAsyncEventListener();
-    AsyncEventQueue persistentAsyncQueue = factory.create("persistentAsyncQueue", listener);
-    ```
-
--   gfsh:
-
-    ``` pre
-    gfsh>create async-event-queue --id="persistentAsyncQueue" 
--persistent=true 
-    --disk-store="diskStoreA" --parallel=true --listener=MyAsyncEventListener 
-    --listener-param=url#jdbc:db2:SAMPLE --listener-param=username#gfeadmin 
--listener-param=password#admin1
-    ```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/developing/events/configuring_highly_available_servers.html.md.erb
----------------------------------------------------------------------
diff --git a/developing/events/configuring_highly_available_servers.html.md.erb 
b/developing/events/configuring_highly_available_servers.html.md.erb
deleted file mode 100644
index 3b80d96..0000000
--- a/developing/events/configuring_highly_available_servers.html.md.erb
+++ /dev/null
@@ -1,38 +0,0 @@
----
-title:  Configuring Highly Available Servers
----
-
-<a 
id="configuring_highly_available_servers__section_7EB2A7E38B074AAAA06D22C59687CB8A"></a>
-With highly available servers, if the client's primary server crashes, one of the backup servers steps in and takes over messaging with no interruption in service.
-
-To configure high availability, set the `subscription-redundancy` in the 
client's pool configuration. This setting indicates the number of secondary 
servers to use. For example:
-
-``` pre
-<!-- Run one secondary server -->
-<pool name="red1" subscription-enabled="true" subscription-redundancy="1"> 
-  <locator host="nick" port="41111"/> 
-  <locator host="nora" port="41111"/> 
-</pool> 
-```
-
-``` pre
-<!-- Use all available servers as secondaries. One is primary, the rest are 
secondaries -->
-<pool name="redX" subscription-enabled="true" subscription-redundancy="-1"> 
-  <locator host="nick" port="41111"/> 
-  <locator host="nora" port="41111"/> 
-</pool> 
-```
-
-When redundancy is enabled, secondary servers maintain queue backups while the 
primary server pushes events to the client. If the primary server fails, one of 
the secondary servers steps in as primary to provide uninterrupted event 
messaging to the client.
-
-The following table describes the different values for the `subscription-redundancy` setting:
-
-| subscription-redundancy | Description                                                                     |
-|-------------------------|---------------------------------------------------------------------------------|
-| 0                       | No secondary servers are configured, so high availability is disabled.         |
-| &gt; 0                  | Sets the precise number of secondary servers to use for backup to the primary. |
-| -1                      | Every server that is not the primary is to be used as a secondary.             |
-
--   **[Highly Available Client/Server Event 
Messaging](../../developing/events/ha_event_messaging_whats_next.html)**
-
-
